GitLab - How to add a badge based on pipeline jobs

My goal is to show badges based on pipeline results.
I have a private GitLab CE Omnibus instance with the following .gitlab-ci.yml:
image: python:3.6

stages:
  - lint
  - test

before_script:
  - python -V
  - pip install pipenv
  - pipenv install --dev

lint:
  stage: lint
  script:
    - pipenv run pylint --output-format=text --load-plugins pylint_django project/ | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - ls
    - pwd
    - pipenv run anybadge --value=$score --file=pylint.svg pylint
  artifacts:
    paths:
      - pylint.svg

test:
  stage: test
  script:
    - pipenv run python manage.py test
So I thought I would store the image in the artifacts of the lint job and display it via the badge feature.
But I encounter the following issue: when I browse https://example.com/[group]/[project]/-/jobs/[ID]/artifacts/file/pylint.svg, instead of seeing the badge I get the following message:
The image could not be displayed because it is stored as a job artifact. You can download it instead.
And anyway, I feel like this is the wrong way to go, because even if I could get the image, there doesn't seem to be a way to fetch it from the latest job, since GitLab URLs for badge images only support %{project_path}, %{project_id}, %{default_branch} and %{commit_sha}.
So how would one add a badge to a GitLab project based on an SVG generated from results in a GitLab pipeline?
My guess is that I could push to a .badge folder, but that doesn't sound like a clean solution.

You can indeed get the artifact(s) for the latest job (see the documentation here), but the trick is that you need to use a slightly different URL:
https://example.com/[group]/[project]/-/jobs/artifacts/[ref]/raw/pylint.svg?job=lint
where [ref] is the reference to your branch/commit/tag.
Speaking of the badge placeholders available in GitLab, you can potentially put %{default_branch} or %{commit_sha} into [ref]. This won't give you the correct badge for every branch, but at least your default branch will get one.
Please also note that the ?job=lint query parameter is required; without it the URL won't work.
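For example, a project badge configured under Settings > General > Badges could point at that raw artifact URL using the supported placeholders (a sketch assuming the lint job and pylint.svg artifact from the question; adjust the host and link target to your instance):

Link:            https://example.com/%{project_path}/-/pipelines
Badge image URL: https://example.com/%{project_path}/-/jobs/artifacts/%{default_branch}/raw/pylint.svg?job=lint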

Related

GitLab CI/CD variables are not getting injected while running a GitLab pipeline

I am running the below section in my gitlab-ci.yml file:
script:
  - pip install --upgrade pip
  - cd ./TestAutomation
  - pip install -r ./requirements.txt
Below are the keys and values; I have to pass these values to the pipeline, with the key used as a variable:
ENV: dev
I have added all three variables in the GitLab CI/CD variables section by expanding it, just adding a single value along with each key.
I also found that we can add variables in the .yml file itself, as below. I am not sure how we can add multiple values for one key:
variables:
  TEST:
    value: "some value"  # this would be the default value
    description: "This variable makes cakes delicious"
When I run the pipeline I am getting errors; it looks like these variables and values are not injected properly.
More details:
I am getting the same error while running the pipeline, hence my suspicion is that the Category variable is not injected properly when I am running through the pipeline.
If needed I will show it on a screen share.
Please find attached an image snippet of my gitlab-ci.yml file.
I am passing the below parameters while running the pipeline.
What I have observed is that the values associated with the keys I am passing as parameters or variables are not injected or substituted for the key. So ideally ${Category} should be replaced with the value smoke, etc.
Variables set in the GitLab UI are not passed down to service containers. To set them, assign them to variables in the UI, then re-assign them in your .gitlab-ci.yml:
stages:
  - Test

# Added this to your yml file
variables:
  ENV: $ENV
  BROWSER: $BROWSER
  Category: $Category

ui_tests:
  stage: Test
  image:
    name: joyzourky/python-chromedriver:3.8
    entrypoint: [""]
  tags:
    - micro
  only:
    - develop
  when: manual
  script:
    - pip install --upgrade pip
    - cd ./src/Tests/UIAutomation
    - pip install -r ./requirements.txt
    - pytest -s -v --env=${ENV} --browser=${BROWSER} --alluredir=./reports ./tests -m ${Category}
  artifacts:
    when: always
    paths:
      - ./src/Tests/UIAutomation/reports/
      - ./src/Tests/UIAutomation/logs/
    expire_in: 1 day
Please refer to the attachment; it's working without any issue.
When GitLab CI/CD variables are not getting injected into your pipelines as environment variables, please follow these steps to verify.
Check whether the variable is defined. You need at least the Maintainer role for your user. Go to Settings --> CI/CD --> Variables. You can see all project variables, and group variables (inherited).
Next, check whether these variables are defined as Protected variables. If they are marked as Protected, then they are only exposed to protected branches or protected tags. I would suggest unchecking this if your current branch is not a protected branch; otherwise, you can always make your current branch a protected one.
Next, check whether your code is accessing the environment variables correctly. Based on your scripting language, just access them as you would any regular environment variable.
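For example, a throwaway job can simply echo the variable to confirm it is injected (a minimal sketch; check_variables is a made-up job name and ENV is the variable name from the question):

check_variables:
  stage: test
  script:
    - echo "ENV is $ENV"   # should print "dev" if the UI variable reaches the job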
You don't really need to define these variables in the .gitlab-ci.yml file (even though their documentation says so).
Hope this helps.
As @Keet Sugathadasa mentioned, the branch that triggers the CI must be protected; this was my case, so I had to protect it by going to Settings > Repository > Protected branches and protecting the branch from there.

Changing Gitlab SAST json report names

Issue
Note: My CI contains a code complexity checker which can be ignored. This question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my GitLab projects. The GitLab CE and GitLab Runner instances are self-hosted. When the SAST scan is completed, the downloaded artifacts / JSON reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
If possible, how could I change the name of the artifacts for bandit-sast and semgrep-sast?
If question one is possible, does this mean I have to manually specify each analyser for various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.
If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually like above, you can rename the file before it's uploaded as an artifact, but it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the json structure/content is to implement the SAST tests manually without using the provided images at all. You can see all the available SAST analyzers in this Gitlab repo.
For the License Finder analyzer as an example, the Dockerfile says the entrypoint for the image is the run.sh script.
You can see on line 20 of run.sh it sets the name of the file to 'gl-license-scanning-report.json', but we can change the name by running the docker image manually so this doesn't really help. However, we can see that the actual analyzing comes from the scan_project function, which you could replicate.
So while it is possible to manually run these analyzers without the pre-built images, it will be much more difficult to get them to work.

how to fix: This GitLab CI configuration is invalid: jobs:deploy_production script can't be blank

I'm trying GitLab for my first example.
I can't see where the error is here.
This is for Windows, running Firebase, Vue.js and Node.js on GitLab:
image: node:alpine

cache:
  paths:
    - node_modules/

deploy_production:
  stage: deploy
  environment: Production
  only:
    - master

script:
  - npm install
  - npm i -g firebase tools
  - npm run build
  - firebase deploy --non-interactive --token "1/CYHKW-CuYsKOcy2Eo6_oC9akwGjyqtmtRZok93xb5VY"
This GitLab CI configuration is invalid: jobs:deploy_production script
can't be blank
You specify a stage in your deploy_production job but you don't define stages.
Add:
stages:
  - deploy
before your job definition.
Late to the party, but one problem here is the indenting of the script tag, which needs to be under the job deploy_production. script isn't allowed at the top level like you've shown it here.
The error is kind of confusing, but does indicate the situation. Because script isn't at the right indent level, it's not part of the job, and a job requires a script.
Should be:
deploy_production:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - npm install
Another issue is you should screen out your token in the post!
Another way you can get this error message:
Here's what I was trying to do in gitlab-ci.yml:
default:
  cache:
    paths:
      - .gradle
And I was getting this error message:
jobs:default script can't be blank
I was using the documentation here: https://docs.gitlab.com/ee/ci/yaml/
Which clearly indicates how to use default. The message implies gitlab thought default was a job.
ANSWER
You probably know where this is going -- the version I was using was about 3 years behind latest, and the "default" keyword had been added since then.
Check the version of gitlab you're using by going to the Help page (gitlab.domain.com/help), and it's listed at the top of the page.
To find the right documentation, I went to https://gitlab.com/rluna-gitlab/gitlab-ce, then chose my version from the branch drop down. From there went to the docs folder, then clicked on this link in the Popular Documentation table in the README.
https://gitlab.com/rluna-gitlab/gitlab-ce/-/blob/11-6-stable/doc/ci/yaml/README.md

Gitlab CI not invoking the 'pages' job

I have a project hosted on Gitlab. The project website is inside the pages branch and is a jekyll based site.
My .gitlab-ci.yml looks like
pages:
  script:
    - gem install jekyll
    - jekyll build -d public/
  artifacts:
    paths:
      - public
  only:
    - pages

image: node:latest
cache:
  paths:
    - node_modules/
before_script:
  - npm install -g gulp-cli
  - npm install

test:
  script:
    - gulp test
When I pushed this configuration file to master, the pipeline executed only the test job and not the pages job. I thought maybe pushing to master didn't invoke this job because only specifies the pages branch. Then I tried pushing to the pages branch, but to no avail.
How can I trigger the pages job?
You're right to assume that the only constraint makes the job run only on the refs or branches specified in the only clause.
See https://docs.gitlab.com/ce/ci/yaml/README.html#only-and-except
It could be that there's a conflict because the branch and the job have the same name. Could you try renaming the job to something different just to test?
I'd try a couple of things.
First, I'd put in this stages snippet at the top of the YML:
stages:
  - test
  - pages
This explicitly tells the CI to run the pages stage after the test stage is successful.
If that doesn't work, then, I'd remove the only tag and see what happens.
Complementing @rex's answer:
You can do either:
pages:
  script:
    - gem install jekyll
    - jekyll build -d public/
  artifacts:
    paths:
      - public
Which will deploy your site regardless of the branch name, or:
pages:
  script:
    - gem install jekyll
    - jekyll build -d public/
  artifacts:
    paths:
      - public
  only:
    - master # or whatever branch you want to deploy Pages from
Which will deploy Pages from master.
Pls let me know if this helps :)

Pylint badge in gitlab

GitLab has functionality to generate badges for build status and coverage percentage.
Is it possible to create a custom badge to display Pylint results?
Or just display these results in README.md?
I already have a CI job for Pylint.
I have written a Python badge generation package that produces badges visually very similar to those of the main badge services. It is highly flexible; you can import and use it in your Python code, or run it from the command line.
I use this in GitLab CI to display pylint and coverage scores.
There are other ways to do this using shields.io (see the other answer from kubouch), but this approach can be used in situations where you may not have external internet access, such as in a corporate / enterprise setting where firewalls or proxies block internet access.
GitLab CI Setup
1. Generate the badge
My CI pipeline has a step that runs pylint, and I used sed to extract the score from the output text. I then use anybadge (details below) to generate a pylint score badge, and save it as public/pylint.svg.
pylint:
  stage: test
  script:
    - pylint --rcfile=.pylintrc --output-format=text <LIST-OF-FILES-TO-RUN-PYLINT-AGAINST> | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - anybadge --value=$score --file=public/pylint.svg pylint
If pylint returns a non-zero exit code, GitLab will see that as a command error and the job will fail, meaning no badge is generated and a missing image will show wherever the badge is used.
NOTE: pylint will often return non-zero exit codes, since it uses the exit code to communicate the status of the lint check. I suggest using something like pylint-exit to handle pylint return codes in CI pipelines.
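For example, the lint command could be written like this so that pylint's informational exit codes don't fail the job (a minimal sketch mirroring the pattern used in the 2019 update further down; it assumes pylint-exit is installed in the image or in before_script, and project/ stands in for your own file list):

  script:
    - pylint --rcfile=.pylintrc --output-format=text project/ | tee pylint.txt || pylint-exit $?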
2. Register badge as pipeline artifact
I register the generated badge file as an artifact in the CI job by including this in the .gitlab-ci.yml:
pylint:
  ...
    - echo "Pylint score was $score"
    - anybadge --value=$score --file=public/pylint.svg pylint
  artifacts:
    paths:
      - public/pylint.svg
3. Publish badge to GitLab Pages
I include a pages publish step, which deploys everything in the public directory to GitLab pages:
pages:
  stage: deploy
  artifacts:
    paths:
      - public
  only:
    - master
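Note that GitLab requires every job to have a script (otherwise you get the "script can't be blank" error discussed in another question above), so in practice the pages job needs at least a placeholder command. A minimal sketch, assuming the pylint job above already wrote the badge into public/:

pages:
  stage: deploy
  script:
    - echo "Publishing public/ to GitLab Pages"  # placeholder; the public artifact does the real work
  artifacts:
    paths:
      - public
  only:
    - master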
4. Include badge in README.md
When the master pipeline runs for the project, the pylint.svg file is published to GitLab Pages, and I can then reference the image from my project README.md so that the latest pylint badge is displayed.
If you are using https://gitlab.com for your project then the URL for the svg artifact will usually be something like this (replace NAMESPACE with your username, or group name if your project is under a group - more details here):
https://NAMESPACE.gitlab.io/pylint.svg
In your README.md you can include an image with:
![pylint](https://NAMESPACE.gitlab.io/pylint.svg)
If you want to make the image into a link you can use:
[![pylint](https://NAMESPACE.gitlab.io/pylint.svg)](LINKTARGET)
Let me know if you need more information on any of the setup.
Anybadge Python Package
Here's some more info on the anybadge Python package:
You can set the badge label and value, and you can set the color based on thresholds. There are pre-built settings for pylint, coverage, and pipeline success, but you can create any badge you like.
Here is a link to the github project with more detailed documentation: https://github.com/jongracecox/anybadge
Install with pip install anybadge
Example python code:
import anybadge

# Define thresholds: <2=red, <4=orange, <6=yellow, <10=green
thresholds = {2: 'red',
              4: 'orange',
              6: 'yellow',
              10: 'green'}

badge = anybadge.Badge('pylint', 2.22, thresholds=thresholds)
badge.write_badge('pylint.svg')
Example command line use:
anybadge --label pylint --value 2.22 --file pylint.svg 2=red 4=orange 8=yellow 10=green
Update 2019
1. Using GitLab Pages is no longer required
It is now possible to directly access the latest artifact, which simplifies the workaround.
Use a dedicated pylint artifact folder instead of public, and remove the now-unnecessary deploy step (or edit it if it is already used for something else):
pylint:
  stage: test
  before_script:
    - pip install pylint pylint-exit anybadge
  script:
    - mkdir ./pylint
    - pylint --output-format=text . | tee ./pylint/pylint.log || pylint-exit $?
    - PYLINT_SCORE=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' ./pylint/pylint.log)
    - anybadge --label=Pylint --file=pylint/pylint.svg --value=$PYLINT_SCORE 2=red 4=orange 8=yellow 10=green
    - echo "Pylint score is $PYLINT_SCORE"
  artifacts:
    paths:
      - ./pylint/
Note that I also copy the Pylint log file into the artifact folder; this way it is accessible without digging through the pipeline logs.
The badge image will then be available at https://gitlab.example.com/john-doe/foo/-/jobs/artifacts/master/raw/pylint/pylint.svg?job=pylint, and the Pylint log at https://gitlab.example.com/john-doe/foo/-/jobs/artifacts/master/raw/pylint/pylint.log?job=pylint.
2. You can use GitLab's built-in badges instead of images in the README
GitLab can now include badges in a project or group, which are displayed in the project header.
Go to Settings / General / Badges, then create a new badge by setting its link and image link as described above.
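For example, the two fields of a new badge could be filled in like this (a sketch based on the artifact URLs above; replace host, project path, branch and job name with your own):

Link:            https://gitlab.example.com/john-doe/foo/-/jobs/artifacts/master/raw/pylint/pylint.log?job=pylint
Badge image URL: https://gitlab.example.com/john-doe/foo/-/jobs/artifacts/master/raw/pylint/pylint.svg?job=pylint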
If you don't want to use the README, GitLab Pages, anybadge or Dropbox, you can use https://img.shields.io/badge/lint%20score-$score-blue.svg to 'create' a badge (which is just a URL) and change the badge image URL via the GitLab API.
Details and the lint stage of my .gitlab-ci.yml:
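A rough sketch of what that lint stage could look like, assuming a project badge already exists and its numeric ID is stored in a BADGE_ID variable, and that an API token is available as GITLAB_API_TOKEN (both names are made up for this example; the endpoint is GitLab's project badges API):

lint:
  stage: lint
  script:
    # Compute the score, then point the existing project badge at a shields.io image URL
    - score=$(pylint --exit-zero --output-format=text project/ | sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p')
    - >
      curl --request PUT
      --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN"
      --data-urlencode "image_url=https://img.shields.io/badge/lint%20score-${score}-blue.svg"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/badges/$BADGE_ID"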
I developed a workaround solution for real-time per-job badges.
It is not Pylint specific but the approach is general and you can easily modify it into what you need.
This example repo (branch badges) creates a custom badge for each CI job. There is also a complete walkthrough so I won't copy-paste it here.
The core idea is (assuming you're now inside a running CI job):
Create a badge (e.g. fetch it from shields.io into a file).
Upload the badge file to some real-time storage from where it can be linked (e.g. Dropbox).
Dropbox supports calling its API via HTTP requests (see this).
Thus, all the above can be done using e.g. curl or Python requests - basic tools.
You just need to pass the Dropbox access token as secret variable and save the file under the same name to overwrite the old badge.
Then you can directly link the Dropbox badge wherever you need.
There are some tricks to it so be sure to check my example repo if you want to use it.
For me it works quite well and seems to be fast.
The advantage of this method is that you don't have to mess with GitLab Pages.
Instead of publishing on Pages you put it to Dropbox.
That is a simple file transfer called by HTTP request.
No more to that.
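A rough sketch of those two steps inside a job script (assuming a secret DROPBOX_TOKEN variable, a $score value computed earlier in the job, and Dropbox's HTTP upload endpoint; verify the exact header syntax against the Dropbox API docs):

badge:
  script:
    # 1. Create the badge by fetching it from shields.io
    - curl -o lint.svg "https://img.shields.io/badge/lint%20score-${score}-blue.svg"
    # 2. Overwrite the previous badge in Dropbox via an HTTP request
    - >
      curl --request POST https://content.dropboxapi.com/2/files/upload
      --header "Authorization: Bearer $DROPBOX_TOKEN"
      --header "Dropbox-API-Arg: {\"path\": \"/badges/lint.svg\", \"mode\": \"overwrite\"}"
      --header "Content-Type: application/octet-stream"
      --data-binary @lint.svg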
Tutorial
Add the below file - .gitlab-ci.yml to your GitLab repository:
pylint:
  stage: test
  image: python:3.7-slim
  before_script:
    - mkdir -p public/badges public/lint
    - echo undefined > public/badges/$CI_JOB_NAME.score
    - pip install pylint-gitlab
  script:
    - pylint --exit-zero --output-format=text $(find -type f -name "*.py" ! -path "**/.venv/**") | tee /tmp/pylint.txt
    - sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' /tmp/pylint.txt > public/badges/$CI_JOB_NAME.score
    - pylint --exit-zero --output-format=pylint_gitlab.GitlabCodeClimateReporter $(find -type f -name "*.py" ! -path "**/.venv/**") > codeclimate.json
    - pylint --exit-zero --output-format=pylint_gitlab.GitlabPagesHtmlReporter $(find -type f -name "*.py" ! -path "**/.venv/**") > public/lint/index.html
  after_script:
    - anybadge --overwrite --label $CI_JOB_NAME --value=$(cat public/badges/$CI_JOB_NAME.score) --file=public/badges/$CI_JOB_NAME.svg 4=red 6=orange 8=yellow 10=green
    - |
      echo "Your score is: $(cat public/badges/$CI_JOB_NAME.score)"
  artifacts:
    paths:
      - public
    reports:
      codequality: codeclimate.json
    when: always

pages:
  stage: deploy
  image: alpine:latest
  script:
    - echo
  artifacts:
    paths:
      - public
  only:
    refs:
      - master
Substitute the below variables accordingly depending upon your project structure. For example, if your repository is located at company_intern/john/robot_code and you added the .gitlab-ci.yml file to your main branch, then:
$GROUP = company_intern
$SUBGROUP = john
$PROJECT_NAME = robot_code
$BRANCH = main
# Substitute the above variables in the badge
[![pylint](https://gitlab.com/$GROUP/$SUBGROUP/$PROJECT_NAME/-/jobs/artifacts/$BRANCH/raw/public/badges/pylint.svg?job=pylint)](https://gitlab.com/$GROUP/$SUBGROUP/$PROJECT_NAME/-/jobs/artifacts/$BRANCH/browse/public/lint?job=pylint)
Your badge has now been integrated! To verify the linting process locally before committing:
# To lint all the python files in the directory:
pylint --exit-zero --output-format=text $(find -type f -name "*.py" ! -path "**/.venv/**")
# To lint a specific file, say foo.py:
pylint --exit-zero --output-format=text foo.py
References:
pylint-gitlab · PyPI
If you use flake8 rather than pylint, then an easy way to generate a badge is to use genbadge. This simple command-line tool can generate badges for tests, coverage, and flake8.
Simply run
genbadge flake8 -i flake8stats.txt
to generate the badge from a flake8 statistics file. You can then use the badge to provide a quick link to the HTML report generated by flake8-html. See the documentation for details (I'm the author, by the way!).
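A minimal CI sketch of that flow (assuming flake8, flake8-html and genbadge[flake8] are installed in the image or in before_script; the file names are just examples):

flake8:
  stage: test
  script:
    # Write the HTML report and the statistics file, never failing the job on lint findings
    - flake8 . --exit-zero --format=html --htmldir=flake8-report --statistics --tee --output-file flake8stats.txt
    # Turn the statistics into an SVG badge
    - genbadge flake8 -i flake8stats.txt -o flake8-badge.svg
  artifacts:
    paths:
      - flake8-badge.svg
      - flake8-report/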
