Sonarqube Gitlab integration issue with sonar-scanner.properties file - gitlab

I have two projects in GitLab and I am trying to integrate SonarQube with both of them.
Project 1
I have added the 'sonar-scanner.properties' file to Project1 and it's as follows:
sonar-scanner.properties
# SonarQube server
# sonar.host.url & sonar.login are set by the Scanner CLI.
# See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
# Project settings.
sonar.projectKey=Trojanwall
sonar.projectName=Trojanwall
sonar.projectDescription=My new interesting project.
sonar.links.ci=https://gitlab.com/rmesi/trojanwallg2-testing/-/pipelines
#sonar.links.issue=https://gitlab.com/rmesi/trojanwallg2-testing/
# Scan settings.
sonar.projectBaseDir=./
#sonar.sources=./
sonar.sources=./
sonar.sourceEncoding=UTF-8
sonar.host.url=http://sonarqube.southeastasia.cloudapp.azure.com:31000
sonar.login=4f4cbabd17914579beb605c3352349229b4fd57b
#sonar.exclusions=,**/coverage/**
# Fail CI pipeline if Sonar fails.
sonar.qualitygate.wait=true
Then I added the sonar scanner job in the gitlab-ci.yml file:
gitlab-ci.yml
sonar-scanner-trojanwall:
  stage: sonarqube:scan
  image:
    name: sonarsource/sonar-scanner-cli:4.5
    entrypoint: [""]
  variables:
    # Defines the location of the analysis task cache
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    # Shallow cloning needs to be disabled.
    # See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
    GIT_DEPTH: 0
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  only:
    - Production
    - /^[\d]+\.[\d]+\.1$/
  when: on_success
After this, I configured the two variables 'SONAR_HOST_URL' and 'SONAR_TOKEN' and ran the pipeline. It worked perfectly fine for Project 1.
Project 2
Then I needed to do the same for Project 2: the sonar scanner should go into Project 2, scan, and analyze it. For that, I created another project in SonarQube with a new token.
I needed to configure things so that when the pipeline for Project 1 is triggered, it scans both Project 1 and Project 2.
For that, I added another job to Project 1's pipeline.
It's as follows:
gitlab-ci.yml
sonar-scanner-test-repo:
  stage: sonarqube:scan
  trigger:
    include:
      - project: 'rmesi/test-repo'
        ref: master
        file: 'sonarscanner.gitlab-ci.yml'
  only:
    - Production
    - /^[\d]+\.[\d]+\.1$/
  when: on_success
I tried to set up a downstream pipeline to trigger a YAML file in Project 2: when the pipeline in Project 1 runs and the job 'sonar-scanner-test-repo' is triggered, another YAML file in Project 2 is run as a downstream pipeline. That YAML file is as follows:
sonarscanner.gitlab-ci.yml
stages:
  - sonarqube:scan

variables:
  CI_PROJECT_DIR: /builds/rmesi/test-repo

sonar-scanner:
  stage: sonarqube:scan
  image:
    name: sonarsource/sonar-scanner-cli:4.5
    entrypoint: [""]
  variables:
    # Defines the location of the analysis task cache
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    # Shallow cloning needs to be disabled.
    # See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
    GIT_DEPTH: 0
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - cd /builds/rmesi/
    - git clone https://gitlab.com/rmesi/test-repo.git test-repo
    - sonar-scanner
Then I added the 'sonar-project.properties' file to Project 2, which is as follows:
sonar-project.properties
# SonarQube server
# sonar.host.url & sonar.login are set by the Scanner CLI.
# See https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
# Project settings.
sonar.projectKey=test-repo
sonar.projectName=test-repo
sonar.projectDescription=My new interesting project.
sonar.links.ci=https://gitlab.com/rmesi/test-repo/-/pipelines
#sonar.links.issue=https://gitlab.com/rmesi/test-repo/
# Scan settings.
sonar.projectBaseDir=/builds/rmesi/test-repo/
sonar.sources=/builds/rmesi/test-repo/, ./
sonar.sourceEncoding=UTF-8
sonar.host.url=http://sonarqube.southeastasia.cloudapp.azure.com:31000
sonar.login=b0c40e44fd59155d27ee43ae375b9ad7bf39bbdb
#sonar.exclusions=,**/coverage/**
# Fail CI pipeline if Sonar fails.
sonar.qualitygate.wait=true
The issue is that, when the downstream pipeline is run, I am getting an error message: the downstream pipeline is not locating the 'sonar-scanner.properties' file in Project 2.
Whereas on Project 1, at this step, the log shows:
INFO: Project root configuration file: /builds/rmesi/trojanwallg2-testing/sonar-project.properties
But in Project 2 it is not working.
Does anyone know how to fix this?

I found the solution to this myself.
I needed to add
"- cd /builds/rmesi/test-repo ; sonar-scanner"
to the script section of the job in the 'sonarscanner.gitlab-ci.yml' file.
That way, the runner changes directly into the desired directory and executes the 'sonar-scanner' command there.
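In other words, the script section of the downstream job ends up as follows (a sketch, using the paths and repository URL from the question):

```yaml
script:
  - cd /builds/rmesi/
  - git clone https://gitlab.com/rmesi/test-repo.git test-repo
  # Change into the clone before scanning, so sonar-scanner picks up
  # the sonar-project.properties at the repository root.
  - cd /builds/rmesi/test-repo
  - sonar-scanner
```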

Related

Is there a way to download file(output by ci job) from browser in gitlab ci?

I can run a script to build my project in the gitlab-ci.yaml config. Is there a way to let the pipeline make the file output by the build script downloadable from the browser, so I can get it onto my computer?
You could create a job artifact from the output of commands from your pipeline (like your build script), assuming you have redirected said output to a file.
You can then download job artifacts by using the GitLab UI or the API.
Using the GitLab CLI glab:
glab ci artifact <refName> <jobName> [flags]
# example
glab ci artifact main build
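The API route for the same download looks like this (a sketch; the project ID, ref, job name, and host are placeholders you would replace with your own):

```shell
# Download the artifacts archive of the latest successful "build" job on "main"
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  --output artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/artifacts/main/download?job=build"
```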
This is my CI config for downloading build files from the GitLab UI via the artifacts field.
stages:
  - build

.build-base:
  stage: build
  script:
    - cd dev/${APP_PATH}
    - yarn ${BUILD_SCRIPT}
  when: manual
  after_script:
    - mv dev/${APP_PATH}/dist ./${APP_OUTPUT_NAME}
  artifacts:
    paths:
      - ./${APP_OUTPUT_NAME}
    expire_in: 2 week
    name: ${APP_OUTPUT_NAME}_${CI_JOB_ID}
  tags:
    - kube-runner

order_workbench_sit_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build:sit'
    APP_OUTPUT_NAME: 'order_workbench_sit'

order_workbench_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build'
    APP_OUTPUT_NAME: 'order_workbench'

How to speed up Gitlab CI job with cache and artifacts

I want the GitLab test job for my Rust project to run faster.
Locally it rebuilds quickly, but in the GitLab job every build runs as slowly as the first one.
I'm looking for a way to use artifacts or a cache from the previous pipeline to speed up the Rust test and build processes.
# .gitlab-ci.yml
stages:
  - test

test:
  stage: test
  image: rust:latest
  script:
    - cargo test
GitLab CI/CD supports caching between CI jobs using the cache keyword in your .gitlab-ci.yml. It can only cache files in the project directory, so you need to set the environment variable CARGO_HOME if you also want to cache the cargo registry.
You can add a cache setting at the top level to setup a cache for all jobs that don't have a cache setting themselves and you can add it below a job definition to setup a cache configuration for this kind of job.
See the keyword reference for all possible configuration options.
Here is one example configuration that caches the cargo registry and the temporary build files and configures the clippy job to only use the cache but not write to it:
stages:
  - test

cache: &global_cache            # Default cache configuration with YAML variable
                                # `global_cache` pointing to this block
  key: ${CI_COMMIT_REF_SLUG}    # Share cache between all jobs on one branch/tag
  paths:                        # Paths to cache
    - .cargo/bin
    - .cargo/registry/index
    - .cargo/registry/cache
    - target/debug/deps
    - target/debug/build
  policy: pull-push             # All jobs not setup otherwise pull from
                                # and push to the cache

variables:
  CARGO_HOME: ${CI_PROJECT_DIR}/.cargo  # Move cargo data into the project
                                        # directory so it can be cached

# ...

test:
  stage: test
  image: rust:latest
  script:
    - cargo test

# only for demonstration, you can remove this block if not needed
clippy:
  stage: test
  image: rust:latest
  script:
    - cargo clippy  # ...
  only:
    - master
  needs: []
  cache:
    <<: *global_cache  # Inherit the cache configuration `&global_cache`
    policy: pull       # But only pull from the cache, don't push changes to it
If you want to use cargo publish from CI, you should then add .cargo to your .gitignore file. Otherwise cargo publish will show an error that there is an uncommitted directory .cargo in your project directory.
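For example, the relevant .gitignore entries would be (assuming the cache paths from the config above):

```
# cargo data moved into the project directory via CARGO_HOME
.cargo/
# local build output (usually ignored in Rust projects anyway)
target/
```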

GitLab CI - Run pipeline when the contents of a file changes

I have a mono-repo with several projects (not my design choice).
Each project has a .gitlab-ci.yml setup to run a pipeline when a "version" file is changed. This is nice because a user can check-in to stage or master (for a hot-fix) and a build is created and deployed to a test environment.
The problem is when a user does a merge from master to stage and commits back to stage (to pull in any hot-fixes). This causes ALL the pipelines to run; even for projects that do not have actual content changes.
How do I allow the pipeline to run from master and/or stage but ONLY when the contents of the "version" file change? Like when a user changes the version number.
Here is an example of the .gitlab-ci.yml (I have 5 of these, 1 for each project in the mono-repo)
#
# BUILD-AND-TEST - initial build
#
my-project-build-and-test:
  stage: build-and-test
  script:
    - cd $MY_PROJECT_DIR
    - dotnet restore
    - dotnet build
  only:
    changes:
      - "MyProject/.gitlab-ci.VERSION.yml"
  # no needs: here because this is the first step

#
# PUBLISH
#
my-project-publish:
  stage: publish
  script:
    - cd $MY_PROJECT_DIR
    - dotnet publish --output $MY_PROJECT_OUTPUT_PATH --configuration Release
  only:
    changes:
      - "MyProject/.gitlab-ci.VERSION.yml"
  needs:
    - my-project-build-and-test
... and so on ...
I am still new to git, GitLab, and CI/pipelines. Any help would be appreciated! (I have little say in changing the mono-repo)
The following .gitlab-ci.yml will run the test_job only if the file version changes.
test_job:
  script: echo hello world
  rules:
    - changes:
        - version
See https://docs.gitlab.com/ee/ci/yaml/#ruleschanges
See also
Run jobs only/except for modifications on a path or file
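Note that rules:changes normally compares against the previous commit on the branch, so a merge commit that touches the file still triggers the job. Recent GitLab versions (15.3+) support an additional compare_to key to compare against a fixed ref instead; a sketch, assuming you want to compare against master:

```yaml
test_job:
  script: echo hello world
  rules:
    - changes:
        paths:
          - version
        compare_to: 'refs/heads/master'
```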

Deploying a certain build with GitLab

My CI has two main steps, build and deploy. The build step uploads an artifact to Maven Nexus, and currently the manual deploy step just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and upload to nexus here

deploy:
  stage: deploy
  script:
    - // Take latest artifact from nexus and deploy
  when: manual
But to me it doesn't make much sense to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by the same pipeline's build task. Otherwise the deploy step of every pipeline does exactly the same thing regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by this run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, specific place.
I'm not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can include one of the built-in environment variables that indicates which pipeline it's from.
ie:
...
Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal  # (but specify the build$CI_PIPELINE_ID.extension file)
There are a lot of CI env variables you can use that are extremely useful. The full list of them is here. The difference with $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline id is constant for all jobs in the pipeline, no matter when they execute. That means the pipeline id will be the same even if you run a manual step a week after the automated steps. The job id is specific to each job.
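A minimal sketch of the naming scheme; GitLab sets CI_PIPELINE_ID automatically in real jobs, so the hard-coded value here is only for illustration:

```shell
# GitLab exports CI_PIPELINE_ID in every job of a pipeline; we fake a
# value here just to show how the artifact name embeds it.
CI_PIPELINE_ID=12345
ARTIFACT="build${CI_PIPELINE_ID}.extension"
echo "$ARTIFACT"
```

Because the pipeline ID is identical across all jobs of a pipeline, the manual deploy job can reconstruct the same name later and fetch exactly the artifact its own build job pushed.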
Regarding your comment, the usage of artifacts: can solve your problem.
You can put the version number in a file and get the file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - // Build and upload to nexus here
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    - // Take the artifact from nexus using VERSION variable and deploy
  when: manual
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and put the result in out/ directory
    - // Upload the result from out/ to nexus
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - // Take the artifact from the out/ directory and deploy it
  when: manual

CircleCI runs on merged branch

I have a project on GitHub; some issues were solved already and it has merged pull requests.
I am trying to integrate the project with CircleCI by adding a CircleCI config in the root of the project (I created a new branch and pushed it). .circleci/config.yml:
version: 2
jobs:
  build:
    working_directory: ~/circleci
    docker:
      - image: circleci/openjdk:8-jdk
        environment:
          MAVEN_OPTS: -Xmx3200m
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
            - v1-dependencies-
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}
      - run: mvn test
And I get error:
#!/bin/sh -eo pipefail
# No configuration was found in your project. Please refer to https://circleci.com/docs/2.0/ to get started with your configuration.
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
It tries to run a job on a merged pull request.
How can I make CircleCI run builds for my new pull request, in which I added the CircleCI config?
P.S. I've tried adding the CircleCI config to my main branch - it doesn't help.
Thanks!
CircleCI will look for a .circleci/config.yml in the branch that triggers a webhook from GitHub.
This means in a PR that the config must exist in the branch, and once merged, will be included in master.
When first added via the UI, CircleCI only looks at master, but subsequent pushes to any branch (as long as .circleci/config.yml is present in that branch) should work.
It looks like your working_directory is set incorrectly. Perhaps you mean ~/.circleci? By the way, people typically set the working directory to the root directory of the project, not the .circleci directory.
