GitLab CI - Run pipeline when the contents of a file changes

I have a mono-repo with several projects (not my design choice).
Each project has a .gitlab-ci.yml set up to run a pipeline when a "version" file is changed. This is nice because a user can check in to stage or master (for a hot-fix) and a build is created and deployed to a test environment.
The problem is when a user does a merge from master to stage and commits back to stage (to pull in any hot-fixes). This causes ALL the pipelines to run; even for projects that do not have actual content changes.
How do I allow the pipeline to run from master and/or stage but ONLY when the contents of the "version" file change? Like when a user changes the version number.
Here is an example of the .gitlab-ci.yml (I have 5 of these, 1 for each project in the mono-repo)
#
# BUILD-AND-TEST - initial build
#
my-project-build-and-test:
  stage: build-and-test
  script:
    - cd $MY_PROJECT_DIR
    - dotnet restore
    - dotnet build
  only:
    changes:
      - "MyProject/.gitlab-ci.VERSION.yml"
  # no needs: here because this is the first step

#
# PUBLISH
#
my-project-publish:
  stage: publish
  script:
    - cd $MY_PROJECT_DIR
    - dotnet publish --output $MY_PROJECT_OUTPUT_PATH --configuration Release
  only:
    changes:
      - "MyProject/.gitlab-ci.VERSION.yml"
  needs:
    - my-project-build-and-test

... and so on ...
I am still new to git, GitLab, and CI/pipelines. Any help would be appreciated! (I have little say in changing the mono-repo)

The following .gitlab-ci.yml will run the test_job only if the file version changes.
test_job:
  script: echo hello world
  rules:
    - changes:
        - version
See https://docs.gitlab.com/ee/ci/yaml/#ruleschanges
See also
Run jobs only/except for modifications on a path or file
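If the job should also be restricted to the master and stage branches mentioned in the question, a rules entry can combine an if: branch condition with changes:. A minimal sketch, assuming the branch names and the version file path from the question:

my-project-build-and-test:
  stage: build-and-test
  script:
    - cd $MY_PROJECT_DIR
    - dotnet restore
    - dotnet build
  rules:
    # run only on master or stage, and only when the version file itself changed
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "stage"'
      changes:
        - "MyProject/.gitlab-ci.VERSION.yml"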

Related

How do I set up PR validations in Azure DevOps/GitHub?

We are migrating from Azure DevOps to GitHub and we have Build Validations set up where if you make a change in a specific folder, the respective CI pipeline will run when a PR is created.
I am trying to make use of the PR triggers in my YAML file, however when I open a PR it doesn't seem to work.
My pipeline is:
trigger: none

pr:
  branches:
    include:
      - develop
      - release/*
      - ProductionSupport/*
  paths:
    include:
      - cicd/pipelines/common/pre-commit-ci.yaml
      - src
      - cicd

pool:
  vmImage: ubuntu-latest

variables:
  PRE_COMMIT_HOME: $(Pipeline.Workspace)/pre-commit-cache

steps:
  - bash: echo "##vso[task.setvariable variable=PY]`python -V`"
    displayName: Get python version
  - task: Cache@2
    inputs:
      key: pre-commit | .pre-commit-config.yaml | "$(PY)"
      path: $(PRE_COMMIT_HOME)
  - bash: |
      pip install --quiet pre-commit
      pre-commit run
    displayName: 'Run pre-commit'
As a test to make sure my branches/paths were correct I updated the triggers section to:
trigger:
  branches:
    include:
      - develop
      - release/*
      - ProductionSupport/*
  paths:
    include:
      - cicd/pipelines/common/pre-commit-ci.yaml
      - src
      - cicd
Then when I made a change in one of the files in these folders, the pipeline was successfully triggered. Am I specifying my PR validation incorrectly?
Your yml definition seems correct.
You mentioned that the CI trigger works fine and that you are migrating from Azure DevOps to GitHub. That brings to mind a situation that exactly reproduces what you're experiencing and that you might not expect:
PR Trigger Override
For example, if your pipeline is the same one as before (you only changed the pipeline source) and you didn't delete the previous build validation (or the previous pipeline name is the same as the current one), then the pr section in your GitHub YAML file will be overridden; only the build validation on the DevOps side will take effect.
I suggest you check whether the pipeline still has build validation settings attached (if your project structure is complex, this may be difficult to find), or simply create a completely new pipeline with the new YAML file.

Sonarqube Gitlab integration issue with sonar-scanner.properties file

I have two projects in GitLab and I am trying to integrate SonarQube with my GitLab projects.
Project 1
I have added the 'sonar-scanner.properties' file to Project1 and it's as follows:
sonar-scanner.properties
# SonarQube server
# sonar.host.url & sonar.login are set by the Scanner CLI.
# See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
# Project settings.
sonar.projectKey=Trojanwall
sonar.projectName=Trojanwall
sonar.projectDescription=My new interesting project.
sonar.links.ci=https://gitlab.com/rmesi/trojanwallg2-testing/-/pipelines
#sonar.links.issue=https://gitlab.com/rmesi/trojanwallg2-testing/
# Scan settings.
sonar.projectBaseDir=./
#sonar.sources=./
sonar.sources=./
sonar.sourceEncoding=UTF-8
sonar.host.url=http://sonarqube.southeastasia.cloudapp.azure.com:31000
sonar.login=4f4cbabd17914579beb605c3352349229b4fd57b
#sonar.exclusions=,**/coverage/**
# Fail CI pipeline if Sonar fails.
sonar.qualitygate.wait=true
Then I added the sonar scanner job in the gitlab-ci.yml file:
gitlab-ci.yml
sonar-scanner-trojanwall:
  stage: sonarqube:scan
  image:
    name: sonarsource/sonar-scanner-cli:4.5
    entrypoint: [""]
  variables:
    # Defines the location of the analysis task cache
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    # Shallow cloning needs to be disabled.
    # See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
    GIT_DEPTH: 0
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  only:
    - Production
    - /^[\d]+\.[\d]+\.1$/
  when: on_success
After this, I configured the two variables 'SONAR_HOST_URL' and 'SONAR_TOKEN' and then ran the pipeline. It worked perfectly fine for Project 1.
Project 2
Then I needed to do the same for Project 2 as well: the sonar scanner should go into Project 2, scan it and analyze it. For that, I created another project in SonarQube with a new token.
I needed to configure in such a way that when the pipeline for Project 1 is triggered, it scans both Project 1 and 2.
For that, I added another job in Project1's pipeline.
It's as follows:
gitlab-ci.yml
sonar-scanner-test-repo:
  stage: sonarqube:scan
  trigger:
    include:
      - project: 'rmesi/test-repo'
        ref: master
        file: 'sonarscanner.gitlab-ci.yml'
  only:
    - Production
    - /^[\d]+\.[\d]+\.1$/
  when: on_success
I tried to set up a downstream pipeline that triggers a YAML file in Project 2: when the pipeline in Project 1 is triggered and the job 'sonar-scanner-test-repo' runs, another YAML file in Project 2 is run as a downstream pipeline. That YAML file is as follows:
sonarscanner.gitlab-ci.yml
stages:
  - sonarqube:scan

variables:
  CI_PROJECT_DIR: /builds/rmesi/test-repo

sonar-scanner:
  stage: sonarqube:scan
  image:
    name: sonarsource/sonar-scanner-cli:4.5
    entrypoint: [""]
  variables:
    # Defines the location of the analysis task cache
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    # Shallow cloning needs to be disabled.
    # See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
    GIT_DEPTH: 0
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - cd /builds/rmesi/
    - git clone https://gitlab.com/rmesi/test-repo.git test-repo
    - sonar-scanner
Then I added the 'sonar-project.properties' file in Project2 which is as follows:
sonar-project.properties
# SonarQube server
# sonar.host.url & sonar.login are set by the Scanner CLI.
# See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
# Project settings.
sonar.projectKey=test-repo
sonar.projectName=test-repo
sonar.projectDescription=My new interesting project.
sonar.links.ci=https://gitlab.com/rmesi/test-repo/-/pipelines
#sonar.links.issue=https://gitlab.com/rmesi/test-repo/
# Scan settings.
sonar.projectBaseDir=/builds/rmesi/test-repo/
sonar.sources=/builds/rmesi/test-repo/, ./
sonar.sourceEncoding=UTF-8
sonar.host.url=http://sonarqube.southeastasia.cloudapp.azure.com:31000
sonar.login=b0c40e44fd59155d27ee43ae375b9ad7bf39bbdb
#sonar.exclusions=,**/coverage/**
# Fail CI pipeline if Sonar fails.
sonar.qualitygate.wait=true
The issue is that, when the downstream pipeline runs, I get the following error message:
I figured out that the downstream pipeline is not locating the 'sonar-scanner.properties' file in Project 2. (Lines 68 and 74)
Whereas for Project 1, at this step, it shows:
INFO: Project root configuration file: /builds/rmesi/trojanwallg2-testing/sonar-project.properties
But in Project 2 it's not working.
Does anyone know how to fix this?
I found the solution to this myself.
I needed to add
"- cd /builds/rmesi/test-repo ; sonar-scanner"
to the script section of the job in the 'sonarscanner.gitlab-ci.yml' file.
That way, the runner changes directly into the desired directory and executes the 'sonar-scanner' command there.
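For clarity, a minimal sketch of the adjusted downstream job with that fix applied (paths and the clone URL taken from the question; the rest of the job is unchanged):

sonar-scanner:
  stage: sonarqube:scan
  script:
    - cd /builds/rmesi/
    - git clone https://gitlab.com/rmesi/test-repo.git test-repo
    # run the scanner from inside the cloned project so it finds that
    # project's properties file
    - cd /builds/rmesi/test-repo ; sonar-scanner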

Gitlab CI: build for CI and Merge request but publish only CI to pages

I have a .gitlab-ci.yml file which I want to use to run a script for merge request validation. The same script should also be used in CI, but only there should the result be published to GitLab Pages. Also, only for CI should the result be cached.
This is a simplified version of the current .gitlab-ci.yml:
pages:
  stage: deploy
  script:
    - mkdir public/
    - touch public/file.txt
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(The real-world code is in the fdroid-firefox gitlab repo.)
There are 2 ways the pipeline can be triggered, and depending on this I do or do not want to publish to pages:
- by merge request validation. In this case, I want to execute the script part, but I don't want to publish or cache the result (otherwise, anyone with permission to create a merge request could overwrite the GitLab Pages content).
- by CI (which is triggered both after check-in to the master branch and on a schedule). In this case, I want the result to be cached and the GitLab Pages content to be updated.
I already tried splitting up the stages:
stages:
  - build
  - deploy

build_repo:
  stage: build
  script:
    - mkdir public/
    - touch public/file.txt

pages:
  stage: deploy
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(Original .gitlab-ci.yml file)
But by doing this, the pages:deploy job failed because it does not have access to the result of the build stage. The pages:deploy job shows an error symbol and the tooltip says missing pages artifacts. (real world log)
The log says:
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
What am I doing wrong that I don't have access to the result of the build stage?
How can I run the script section in both cases but still deploy to pages only from master branch?
You don't save your public path as artifacts in your build job, and that's why they are missing in the pages job at the next deploy stage.
You have this:
build_repo:
  stage: build
  script:
    - your script
Try to save artifacts in your build job like this:
build_repo:
  stage: build
  script:
    - your script
  artifacts:
    when: always
    paths:
      - public
That way they will be passed on to the pages job in the next deploy stage, which can then see them.
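Putting both pieces together, a minimal sketch of the whole pipeline, reusing the job names, the fdroid cache path and the only: master restriction from the question:

stages:
  - build
  - deploy

build_repo:
  stage: build
  script:
    - mkdir public/
    - touch public/file.txt
  artifacts:
    when: always
    paths:
      - public

pages:
  stage: deploy
  # runs only on master, so merge request pipelines build but never publish
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid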

Deploying a certain build with GitLab

My CI has two main steps, build and deploy. The result of the build is an artifact uploaded to Maven Nexus, and currently the manual deploy step just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and upload to nexus here

deploy:
  stage: deploy
  script:
    - // Take latest artifact from nexus and deploy
  when: manual
But to me it doesn't make much sense to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by that same pipeline's build task. Otherwise the deploy step of every pipeline does exactly the same thing regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by this run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, specific place.
Not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can include one of the built-in environment variables that indicates which pipeline it's from.
ie:
...

Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal #(but specify the build$CI_PIPELINE_ID.extension file)
There are a lot of CI environment variables you can use that are extremely useful. The full list of them is here. The difference between $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline id is constant for all jobs in the pipeline, no matter when they execute. That means the pipeline id will be the same even if you run a manual step a week after the automated steps. The job id is specific to each job.
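As a quick illustration (a hypothetical job added only to show the difference between the two variables):

show-ids:
  stage: build
  script:
    # $CI_PIPELINE_ID is identical for every job in this pipeline run,
    # while $CI_JOB_ID is unique to this particular job.
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"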
Regarding your comment, the usage of artifacts: can solve your problem.
You can put the version number in a file and fetch the file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - // Build and upload to nexus here
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    - // Take the artifact from nexus using VERSION variable and deploy
  when: manual
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and put the result in the out/ directory
    - // Upload the result from out/ to nexus
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - // Take the artifact from the out/ directory and deploy it
  when: manual

How to control whether a stage runs based on the previous stage's result, without using artifacts?

We have a project hosted on an internal Gitlab installation.
The Pipeline of the project has 3 stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when the Tests stage fails.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: Is there an alternative solution to artifacts to achieve this task?
The used .gitlab-ci.yml looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing fully automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo. Your only advantage over running the targets locally right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI jobs, because that goes against the basic idea. If you insist on your approach, you could look up the artifacts of the previous stages and abort the manual execution if something is wrong.
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the master and not saved on the runners. You should be fine having your artifacts passed from stage to stage.
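If you do rely on artifacts, a minimal sketch of passing a result from the tests stage to the deploy stage (job names taken from the question; the test command and report file are hypothetical):

all_tests:
  stage: tests
  script:
    - ./run_tests.sh > test-report.txt   # hypothetical test command and report file
  artifacts:
    paths:
      - test-report.txt

prod:
  stage: deploy
  script:
    - cat test-report.txt   # the artifact from the tests stage is available here
  environment: prod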
By the way, the default for when is on_success, which means a job is executed only when all jobs from prior stages succeed.
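To illustrate, a minimal sketch of the deploy job relying on that default (job name and environment taken from the question; the echo is a placeholder for the real deploy commands):

prod:
  stage: deploy
  script:
    - echo "deploying"   # placeholder for the real deploy commands
  # No "when: manual" here: the default "when: on_success" means this job
  # only runs if every job in the earlier build and tests stages succeeded.
  environment: prod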
