I have a .gitlab-ci.yml with the following content:
stages:
  - stage1
  - stage2

job1:
  stage: stage1
  script:
    - echo "Running default stage1, pipeline_source=$CI_PIPELINE_SOURCE"

job2:
  stage: stage2
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
    - when: always
  script:
    - echo "Running STAGE2! pipeline_source=$CI_PIPELINE_SOURCE"
When I commit this change to a merge-request branch, it seems two pipelines are started.
Is this a known issue in GitLab, or am I misunderstanding something?
GitLab creates pipelines both for your branch and for the merge request. This is an "expected"[1] feature of GitLab, a consequence of using rules:. (Oddly enough, when using only/except, merge request pipelines only happen when a job uses only: - merge_requests.)
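For comparison, with the older only/except syntax a job participates in merge request pipelines only when it opts in explicitly; a minimal sketch:

```yaml
# Sketch: with only/except, this job runs only in merge request pipelines
job1:
  script:
    - echo "runs in the merge request pipeline"
  only:
    - merge_requests
```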
If you simply want to disable pipelines for merge requests and only run branch pipelines, you can include the Branch-Pipelines template, which provides a workflow: that prevents pipelines for merge requests.
include:
  - template: 'Workflows/Branch-Pipelines.gitlab-ci.yml'
Additionally, you can see this answer for a workflow that will prevent duplicates between the pipelines for merge requests and branch pipelines only when a merge request is open.
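That workflow is essentially GitLab's documented pattern for switching between branch pipelines and merge request pipelines; a sketch:

```yaml
workflow:
  rules:
    # Run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Suppress the branch pipeline when an open merge request exists for the branch
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise, run branch pipelines
    - if: $CI_COMMIT_BRANCH
```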
[1]: I've always found this to be a quirk of GitLab and, as an administrator of GitLab for hundreds of users, I've gotten this question many, many times. So you're not alone in being surprised by this 'expected feature'.
You didn't do anything wrong. This is actually intended, though it's a weird side-effect of the fact that Merge Requests have their own pipeline contexts. So when you commit to a branch that's associated with a merge request, two pipelines start:
1. A branch-based pipeline, with no context of the merge request
2. A merge request pipeline, with all the merge request variables populated (this is called a "detached" pipeline)
You can control this behavior by using a workflow keyword in your pipeline. We use the following workflow settings on our repositories:
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_TAG
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_COMMIT_REF_PROTECTED == "true"
The above rules prevent branch pipelines from running unless the branch is a protected branch (i.e., you're merging into the main branch), the commit is tagged (i.e., you're releasing code), or the pipeline has been scheduled. This means that when you commit to an MR, the branch-based pipeline (#1 above) doesn't run, and you are left with one pipeline running.
Related
Is it possible to trigger another pipeline from the pipeline completion trigger if there is a failure in the triggering pipeline? There seems to be no configuration/property available by default as per the documentation. I just wanted to check whether there is any possible way with the pipeline completion trigger.
If the initial pipeline fails to trigger, all subsequent pipelines would logically fail to trigger. Try having your initial pipeline start with a stage that will never fail; that way, even if a later stage fails, the pipeline itself still completes and the completion trigger can fire the subsequent pipelines successfully.
Is it possible to trigger another pipeline from the pipeline completion trigger if there is a failure in the triggering pipeline?
There is no built-in configuration/property to trigger another pipeline from the pipeline completion trigger when the triggering pipeline fails.
To resolve this issue, you could add a PowerShell task that calls the REST API Builds - Queue:
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=6.1-preview.7
You could check this thread for the detailed scripts.
Then set this PowerShell task with the condition "Only when a previous task has failed":
In this case, even when a previous task fails, the REST API will still be called at the end of the pipeline to trigger the build.
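A minimal sketch of such a task, assuming the downstream pipeline's definition ID is 123 (a placeholder) and that the job is allowed to use System.AccessToken:

```yaml
- task: PowerShell@2
  displayName: Queue the downstream build
  condition: failed()   # 'Only when a previous task has failed'
  inputs:
    targetType: inline
    script: |
      # Build the Builds - Queue REST API URL from predefined variables
      $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds?api-version=6.1-preview.7"
      $body = '{"definition": {"id": 123}}'   # placeholder definition ID
      Invoke-RestMethod -Uri $url -Method Post -Body $body `
        -ContentType "application/json" `
        -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
```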
I was able to meet my requirement through the pipeline completion trigger itself. It is possible if we define stages in the triggering pipeline. I'm posting the answer in case someone else is looking for the same approach.
You need to define the triggering pipeline with stages, and make sure that at least one stage succeeds on every run. I already had a few stages defined, so this matched my requirement exactly.
Triggering pipeline YAML definition: (pipeline name: pipeline1)
trigger: none
pr: none

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: stage_1
    displayName: Stage-1
    jobs:
      - job: greeting
        displayName: Greeting
        steps:
          - script: |
              echo "Hello world!"
              exit 1
  - stage: stage_2
    displayName: Stage-2
    condition: always()
    jobs:
      - job: thanking
        displayName: Thanking
        steps:
          - script: |
              echo "Thank you!"
Define the pipeline completion trigger with stage filters for the triggered pipeline.
Triggered pipeline YAML definition:
trigger: none
pr: none

resources:
  pipelines:
    - pipeline: Pipeline_1
      source: pipeline1
      trigger:
        stages:
          - stage_2

pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: greeting
    steps:
      - script: |
          echo "Hello world!"
The triggered pipeline will then run irrespective of stage_1's result in the triggering pipeline, since stage_2 succeeds on every run.
I'm aware that it is possible to trigger a pipeline in another project by adding the following to a .gitlab-ci.yml file:
bridge:
  stage: stage_name_here
  trigger:
    project: path_to_another_project
    branch: branch_name
    strategy: depend
The problem is that the config above will trigger all jobs, and I want to trigger only 2 jobs within the pipeline.
Any idea on how to trigger only those 2 specific jobs?
You can play with the rules, only, and except keywords in the triggered CI file. For example, add this to the jobs that you want to trigger:
except:
  variables:
    - $CI_PROJECT_ID != {{ your main project id }}
And this to the jobs you don't want to trigger:
except:
  variables:
    - $CI_PROJECT_ID == {{ your main project id }}
Or, if you want to use rules, add this to the jobs you want to run in the main project:
rules:
  - if: $CI_PROJECT_ID == {{ your main project id }}
    when: never
  - when: always
Instead of defining a variable that needs to be passed by the upstream pipeline that triggers the downstream pipeline, I simply added the lines below to the jobs that I don't want to run in the downstream pipeline when it is triggered by another pipeline:
except:
  refs:
    - pipelines
source:
https://docs.gitlab.com/ee/ci/triggers/index.html#configure-cicd-jobs-to-run-in-triggered-pipelines
I have a GitLab project pipeline that triggers a downstream pipeline (GitLab multi-project pipelines).
image: docker

trigger-docs:
  trigger:
    project: my-group/docs
    branch: feat/my-feature-branch
Is there a way for the triggered pipeline in my-group/docs to find out where it was triggered from? I checked the predefined CI variables but none seems to carry this information.
Could it be that my only option is to pass a dedicated variable from the upstream project as documented at https://docs.gitlab.com/ee/ci/pipelines/multi_project_pipelines.html#pass-cicd-variables-to-a-downstream-pipeline-by-using-the-variables-keyword?
Here's the workaround we have been using for months now: send along a custom UPSTREAM_PROJECT variable.
# Trigger a downstream build https://docs.gitlab.com/ee/ci/pipelines/multi_project_pipelines.html
docs-build:
  stage: .post
  variables:
    UPSTREAM_PROJECT: $CI_PROJECT_PATH
  # Variable expansion for 'trigger' or 'trigger:project' does not seem to be supported. If we wanted this we would have
  # to work around it like so: https://gitlab.com/gitlab-org/gitlab/-/issues/10126#note_380343695
  trigger: my-group/docs
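On the downstream side (my-group/docs in this example), the variable is then available like any other CI/CD variable; a hypothetical job could read it:

```yaml
# Hypothetical downstream job reading the custom variable
show-upstream:
  script:
    - echo "Triggered by $UPSTREAM_PROJECT"
```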
I've enabled GitLab container scanning by importing the template Security/Container-Scanning.gitlab-ci.yml and adding a container_scanning block:
container_scanning:
  stage: compliance
  variables:
    DOCKER_IMAGE: $MY_REPO/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    DOCKER_HOST: "tcp://localhost:2375"
However, I would like the container_scanning job to execute only for the develop branch, but the template itself defines a rules block, which prevents me from defining an only block.
Does anyone know how I can extend/override the rules block so that the container_scanning job executes only when a commit is pushed to the develop branch?
Since the template uses rules: you will have to use rules: to change the behavior of when the job is included in the pipeline.
container_scanning:
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
  # ...
When you introduce your own rules: key, it overrides the existing rules: array entirely.
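Put together, a sketch of the include plus the overriding rules: (the stage and variables come from the question; adjust as needed):

```yaml
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

container_scanning:
  stage: compliance
  variables:
    DOCKER_IMAGE: $MY_REPO/$CI_PROJECT_NAME:$CI_COMMIT_SHA
  # This rules: array replaces the template's rules entirely
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
```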
I have an Azure DevOps Pipeline for a Git repository. I currently have a script that validates the PR comments in the Azure Pipeline.
When the code is merged into the main branch, I want to trigger a build. I am not sure how to achieve this with an Azure DevOps pipeline.
# Trigger for Development
trigger:
  branches:
    include:
      - development
      - master

# Trigger checks for PR
pr:
  branches:
    include:
      - development
      - master
      - feature
      - main
  paths:
    exclude:
      - README/*
When the code is merged into the main branch I want to trigger a build
If you want to verify the comments after the code is merged into the main branch, we need to trigger the build after the PR is completed instead of when the PR is created.
So the PR triggers cannot meet the requirement in this case.
To resolve this, we can enable CI triggers for the main branch and add the condition eq(variables['Commitcomment'], 'Merge pull request') to the script task that validates the PR comments.
With this condition, the pipeline will execute the job only when Commitcomment is Merge pull request, which filters out modifications not made via a PR.
To get the value of the variable Commitcomment, we can check the commit message via the predefined variable Build.SourceVersionMessage:
If the commit comes from a PR, it is given a default comment starting with Merge pull request xxx; we can add a bash/PowerShell script to extract the first few fields.
Then use a logging command to set the variable Commitcomment to those first characters (which will be Merge pull request when the commit is a PR merge):
- task: CmdLine@2
  displayName: get the first few fields
  inputs:
    script: |
      echo $(Build.SourceVersionMessage)
      set TempVar=$(Build.SourceVersionMessage)
      set Commitcomment=%TempVar:~0,18%
      echo %Commitcomment%
      echo ##vso[task.setvariable variable=Commitcomment]%Commitcomment%
Reference link: Is there a short 7-digit version of $(SourceVersion) in Azure Devops?
Then use this variable in a condition, condition: and(succeeded(), eq(variables['Commitcomment'], 'Merge pull request')), on your task that verifies the PR comments:
- task: CmdLine@2
  displayName: script to validate the PR comments
  condition: and(succeeded(), eq(variables['Commitcomment'], 'Merge pull request'))
  inputs:
    script: echo To validate the PR comments
In this case, if the commit does not come from a PR, the PR comment validation task will be skipped.
If you just want to launch a build when the merge is done (pull request validated) into a specific branch, your configuration is fine.
If you want to run a validation build, it is currently not integrated into the YAML pipeline configuration (https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#pr-trigger).
To do this, it must be done via the graphical interface:
Project Settings -> Repositories -> Select your repo -> Policies -> Branch Policies -> Select your branch -> Build Validation -> + -> add build information
(https://learn.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops#build-validation)