Changing Gitlab SAST json report names - gitlab

Issue
Note: My CI contains a code complexity checker, which can be ignored; this question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my Gitlab projects. The Gitlab-ce and Gitlab-runner instances are self-hosted. When the SAST scan completes, the downloaded artifacts / JSON reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
If possible, how could I change the name of the artifacts for bandit-sast and semgrep-sast?
If question one is possible, does this mean I have to manually specify each analyser for my various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.

If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually as above, you can rename the file before it's uploaded as an artifact; it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the JSON structure/content is to implement the SAST tests manually, without using the provided images at all. You can see all the available SAST analyzers in this Gitlab repo.
Taking the License Finder analyzer as an example, its Dockerfile shows that the entrypoint for the image is the run.sh script.
You can see on line 20 of run.sh that it sets the name of the report file to 'gl-license-scanning-report.json'. We can already change the name by running the docker image manually, so that by itself doesn't help; however, the actual analysis is done by the scan_project function, which you could replicate.
So while it is possible to run these analyzers manually without the pre-built images, it will be much more difficult to get them to work.
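As a rough sketch of that manual approach for a Python project (the job name, image and report name below are hypothetical, and the resulting file will not follow the gl-sast-report.json schema, so it won't feed GitLab's security dashboard):

bandit-custom:
  stage: CodeScan
  image: python:3.9
  script:
    - pip install bandit
    # bandit exits non-zero when it finds issues; keep the job green so the artifact still uploads
    - bandit -r . -f json -o bandit-custom-report.json || true
  artifacts:
    paths:
      - bandit-custom-report.json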

Related

How to access artifacts in next stage in GitLab CI/CD

I am trying to build a GitLab CI/CD pipeline for the first time. I have two stages, build and deploy. The job in the build stage produces artifacts, and the job in the deploy stage uploads those artifacts to AWS S3. Both jobs use the same runner but different docker images.
default:
  tags:
    - dev-runner

stages:
  - build
  - deploy

build-job:
  image: node:14
  stage: build
  script:
    - npm install
    - npm run build:prod
  artifacts:
    paths:
      - deploy/build.zip

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  script:
    - aws s3 cp deploy/build.zip s3://mys3bucket
The build-job successfully creates the artifacts. The GitLab documentation says artifacts are automatically downloaded and available in the next stage, but it does not specify where and how those artifacts can be consumed in the next stage.
Question
In the deploy-job, will the artifacts be available at the same location, i.e. deploy/build.zip?
The artifacts should be available to the second job in the same location where the first job saved them using the artifacts directive.
I think this question already has an answer on the gitlab forum:
https://forum.gitlab.com/t/access-artifact-in-next-task-to-deploy/9295
You may also need to make sure the jobs run in the correct order using the dependencies directive, which is also mentioned in the forum discussion accessible via the link above.
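For example, a minimal sketch reusing the job names from the question, making the relationship explicit with the dependencies keyword so deploy-job only downloads build-job's artifacts:

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  dependencies:
    - build-job                     # fetch only build-job's artifacts
  script:
    # the artifact is restored at the same relative path inside the project directory
    - aws s3 cp deploy/build.zip s3://mys3bucket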

Why do we use images in a CI/CD pipeline?

Actually, I'm facing a problem with this code:
stages:   # Sorted
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: Node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? I asked, and someone said that we build on it, like the Dockerfile instruction FROM ubuntu:latest, while someone else told me it's because it executes the code. I also don't really understand the script tag above: does it execute inside the image or on the runner?
GitLab Runner is an open source application that collects a pipeline job's payload and executes it. To do this, it implements a number of executors that can be used to run your builds in different scenarios; if you are using the docker executor, you need to specify what image will be used to run your builds.
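As a rough illustration (the image here is just a placeholder, not from the question): with the docker executor, each job's script runs inside a container created from the image you specify, not directly on the runner host.

lint:
  image: php:8.1                    # hypothetical image; a container is started from it
  script:
    - php -v                        # runs inside that container
    - ./addons/scripts/ci/lintphp.sh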

Gitlab docker ci template - what is the header name for

In the first example, the image name is docker:latest.
And the stages are the definition of the pipeline, so I can have build, test and deploy stages.
Snippet 1
gitlab-ci.yml
docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
May I know the definition of docker-build?
Can I name it build or something else? What is its usage?
Snippet 2
gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

build:
  stage: build
  script:
    - docker build -t test .
In the other example, services is defined. Why do I need services, and when don't I need it?
Can I say this example must have another file, 'Dockerfile', for the docker build command to work?
Once built successfully, will the image be named docker:latest?
Job-naming:
There are a few reserved keywords which you cannot use as a job name, like stages, services etc., see https://docs.gitlab.com/ee/ci/yaml/#unavailable-names-for-jobs
You can name your job anything else you like.
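For instance, a small sketch (the same job as Snippet 1, only the key renamed) to show that the job name is just a label you choose:

build-image:
  # identical behaviour to the job called docker-build above; only the name differs
  image: docker:latest
  stage: build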
Stages
As you have written, there is a certain set of predefined stages: .pre, build, test, deploy and .post - but you can also define your own stages with
stages:
  - build
  - build-docker
  - test
  - deploy
Dockerfile
Yes, you need a Dockerfile for docker build to work, and the tag of your image will be test, as defined with -t test.
Regarding building docker images with GitLab CI, I can recommend reading https://blog.callr.tech/building-docker-images-with-gitlab-ci-best-practices/.
I hope this helps somehow. Generally speaking, I recommend reading the GitLab documentation and the getting started guide: https://docs.gitlab.com/ee/ci/quick_start/ - it explains a lot of the default concepts. I would also recommend not asking too many questions within one Stack Overflow question; keep it focused on one topic.

what is the meaning of a colon in a list value in a yaml file, specifically :image of the stage build:image in a .gitlab-ci.yml file

Is the list value build:image just a name like build_image? Or does it have special usage in either the yaml file or the .gitlab-ci.yml file? If there isn't a special usage, what is the value of using name1:name2 instead of name1_name2?
The :image doesn't seem to be put into a variable. When I run this through the gitlab pipeline, the output is
Skipping Git submodules setup
Restoring cache
Downloading artifacts
Running before_script and script
$ echo image is $image
image is
.gitlab-ci.yml
stages:
  - build:image
  - tag:image
  - deploy

build:
  stage: build:image
  script:
    - echo image is $image
I don't see anything like this in the GitLab CI/CD Pipeline Configuration Reference
Where did you see this .gitlab-ci.yml file?
I ran the .gitlab-ci.yml you provided and it seems to work fine, apparently GitLab CI doesn't treat the colon in any special way -- and I wouldn't expect it to, as there is no mention of it in the documentation.
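As a small illustration of the YAML side of this (my own sketch, not from the question): in block context a colon is only a key separator when it is followed by a space, so build:image is parsed as one ordinary string, and $image is never defined.

stages:
  - build:image        # a single string, no different in effect from build_image
  - deploy

build:
  stage: build:image   # must match the string above exactly
  script:
    - echo image is $image   # $image is not set, so this prints "image is"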

Gitlab - How to add badge based on jobs pipeline

My goal is to show badges (for example, a Pylint score badge) based on pipeline results.
I have a private Gitlab CE omnibus instance with the following .gitlab-ci.yml:
image: python:3.6

stages:
  - lint
  - test

before_script:
  - python -V
  - pip install pipenv
  - pipenv install --dev

lint:
  stage: lint
  script:
    - pipenv run pylint --output-format=text --load-plugins pylint_django project/ | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - ls
    - pwd
    - pipenv run anybadge --value=$score --file=pylint.svg pylint
  artifacts:
    paths:
      - pylint.svg

test:
  stage: test
  script:
    - pipenv run python manage.py test
So I thought that I would store the image in the artifacts of the lint job and display it via the badge feature.
But I encounter the following issue: when I browse https://example.com/[group]/[project]/-/jobs/[ID]/artifacts/file/pylint.svg, instead of seeing the badge I get the following message:
The image could not be displayed because it is stored as a job artifact. You can download it instead.
And anyway, I feel like this is the wrong approach, because even if I could get the image, there doesn't seem to be a way to get it from the last job, since the Gitlab URL for badge images only supports %{project_path}, %{project_id}, %{default_branch}, %{commit_sha}.
So how would one add a badge to a Gitlab project based on an SVG generated from results in a Gitlab pipeline?
My guess is that I could push to a .badge folder, but that doesn't sound like a clean solution.
You can indeed get the artifact(s) for the latest job (see documentation here), but the trick is that you need to use a slightly different URL:
https://example.com/[group]/[project]/-/jobs/artifacts/[ref]/raw/pylint.svg?job=lint
where [ref] is the reference to your branch/commit/tag.
Speaking of the badge placeholders available in Gitlab, you can put %{default_branch} or %{commit_sha} into [ref]. This won't give you the correct badge for every branch, but at least your default branch will get one.
Please also note that the ?job=lint query parameter is required; without it the URL won't work.
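Putting the two together, the badge image URL you would enter in the project's badge settings could look something like this (using the default-branch placeholder; adjust the host and project path for your instance):
https://example.com/%{project_path}/-/jobs/artifacts/%{default_branch}/raw/pylint.svg?job=lint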
