Dependency Scanning not triggering in GitLab CI/CD pipeline

I am new to GitLab and was trying to build a sample CI/CD pipeline. Following is my code:
variables:
  REPO_NAME: devsecopscollab/my_test_repo
  IMAGE_TAG: demo-app-1.0

include:
  - template: SAST.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml

stages:
  - Test
  - Build
  - Deploy

job1_runTests:
  stage: Test
  image: python:3.10.8
  before_script:
    - apt-get update && apt-get install make
  script:
    - make test

sast:
  stage: Test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    reports:
      sast: gl-sast-report.json
    when: always

dependency_scanning:
  stage: Test
  variables:
    CI_DEBUG_TRACE: "true"
  artifacts:
    name: dependency_scanning
    paths:
      - gl-dependency-scanning-report.json
    reports:
      dependency_scanning: gl-dependency-scanning-report.json
    when: always

job2_buildImage:
  stage: Build
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS
  script:
    - docker build -t $REPO_NAME:$IMAGE_TAG .
    - docker push $REPO_NAME:$IMAGE_TAG

job3_deploy:
  stage: Deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY ubuntu@$PUBLIC_IP "
      docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS &&
      docker ps -aq | xargs --no-run-if-empty docker stop | xargs --no-run-if-empty docker rm &&
      docker run -d -p 5000:5000 $REPO_NAME:$IMAGE_TAG"
But my pipeline looks like the image here (no dependency scanning stage is shown):
Is something wrong with this pipeline? Why is the dependency scanning stage not visible?
I tried the above code snippet and was expecting a dependency scanning stage to be visible in the pipeline.

Dependency scanning works only on the Ultimate plan. cf. https://gitlab.com/gitlab-org/gitlab/-/issues/327366
https://i.stack.imgur.com/o7GE9.png
Also check this link: https://docs.gitlab.com/ee/user/application_security/dependency_scanning/
To trigger the dependency scanning template, the easiest way is to use:
include:
  - template: SAST.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml
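Why the job silently disappears instead of failing: the GitLab-managed Dependency-Scanning template gates its jobs with rules that check the project's licensed features, so on a non-Ultimate plan the jobs are simply never added to the pipeline (basic SAST, by contrast, is available on all tiers, which is why that job still shows up). The snippet below is a rough paraphrase of that gating logic, not the exact template contents, and the job name is a placeholder:

dependency_scanning_example:
  rules:
    - if: $DEPENDENCY_SCANNING_DISABLED
      when: never
    # $GITLAB_FEATURES lists the licensed features of the project; it only
    # contains "dependency_scanning" on Ultimate, so on other tiers no rule
    # matches and the job is never created.
    - if: '$CI_COMMIT_BRANCH && $GITLAB_FEATURES =~ /\bdependency_scanning\b/'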

Related

This GitLab CI configuration is invalid: variable definition must be either a string or a hash

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH != "main" && $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never
    - when: always

variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE
  IMAGE_TAG: 1.1
  DEV_SERVER_HOST: 3.239.229.167

stages:
  test
  build
  deploy

run_unit_test:
  image: node:17-alpine3.14
  stage: test
  tags:
    - docker
    - ec2
    - remote
  before_script:
    - cd app
    - npm install
  script:
    - npm test
  artifacts:
    when: always
    paths:
      - app/junit.xml
    reports:
      junit: app/junit.xml

build_image:
  stage: build
  tags:
    - ec2
    - shell
    - remote
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .

push_image:
  stage: build
  needs:
    - build_image
  tags:
    - ec2
    - shell
    - remote
  before_script:
    docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    docker push $IMAGE_NAME:$IMAGE_TAG

deploy_to_dev:
  stage: deploy
  tags:
    - ec2
    - shell
    - remote
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY ubuntu@$DEV_SERVER_HOST "
      docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY &&
      docker run -d -p 3000:3000 $IMAGE_NAME:$IMAGE_TAG"
I don't know why I'm getting this error.
While deploying to the dev server, the GitLab YAML parser throws the error "This GitLab CI configuration is invalid: variable definition must be either a string or a hash".
variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE
  IMAGE_TAG: "1.1"
  DEV_SERVER_HOST: "3.239.229.167"
Try adding quotes to the variable values.
Some references: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/22648
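A rough sketch of why the quotes matter (my illustration, not part of the original answer; the variable names are placeholders): YAML parses an unquoted 1.1 as a float, and GitLab only accepts a string (or a hash, for the extended value/description form) as a variable value, which is exactly the validation failure described in the linked issue.

variables:
  BAD_TAG: 1.1      # parsed as a YAML float -> "variable definition must be either a string or a hash"
  GOOD_TAG: "1.1"   # quoted, parsed as a string -> accepted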

Gitlab pipeline failing on "Cleaning up project directory and file based variables"

I have a GitLab pipeline that is failing when it attempts the docker build (using Kaniko).
I have yet to do a successful docker build, BUT this particular error started after I:
Changed the kaniko image from image: gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
Added settings for pushing to insecure registries:
--insecure
--skip-tls-verify
--skip-tls-verify-pull
--insecure-pull
After this change, part of the pipeline looks like this:
before_script:
  - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

build_job:
  tags:
    - xxxx
  only:
    - develop
  stage: build
  script:
    - dotnet build --configuration Release --no-restore

publish_job:
  tags:
    - xxxx
  only:
    - develop
  stage: publish
  artifacts:
    name: "$CI_COMMIT_SHA"
    paths:
      - ./$PUBLISH_DIR
  script:
    - dotnet publish ./src --configuration Release --output $(pwd)/$PUBLISH_DIR

docker_build_dev:
  tags:
    - xxxx
  image:
    name: gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
  script:
    - echo "${CI_PROJECT_DIR}"
    - cp ./src/Dockerfile /builds/xxx/xxx/xxx/Dockerfile
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --insecure
      --skip-tls-verify
      --skip-tls-verify-pull
      --insecure-pull
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
Part of the output from the pipeline is as below:
Skipping Git submodules setup
Downloading artifacts
Downloading artifacts for publish_job (33475)...
Downloading artifacts from coordinator... ok  id=33475 responseStatus=200 OK token=xxxxxxxxxxx
Executing "step_script" stage of the job script
Cleaning up project directory and file based variables
ERROR: Job failed: execution took longer than 1h0m0s seconds
What am I missing?
I was missing something in my GitLab project settings, namely enabling the project's Container Registry:
https://domain/group/subgroup/project/edit
Visibility, project features, permissions >> Container Registry (toggle to enable)
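Not part of the original answer, but since the symptom only surfaced at the 1-hour limit, a job-level timeout would make this kind of hang fail much faster. A minimal sketch; the 20-minute value is an arbitrary choice:

docker_build_dev:
  timeout: 20 minutes   # fail the job after 20 minutes instead of waiting for the 1h limit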

Sonarcloud shows 0% coverage on new code, and also shows 0% coverage on master branch with gitlab ci

I am using GitLab CI to run SonarCloud code analysis on the code.
Here is my gitlab-ci.yaml:
stages:
  - test

before_script:
  - mkdir -p ~/.ssh &&
    cp $gitlab_private_key ~/.ssh/id_ed25519 &&
    chmod 600 ~/.ssh/id_ed25519 &&
    touch ~/.ssh/known_hosts &&
    ssh-keyscan gitlab.com >> ~/.ssh/known_hosts

variables:
  SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
  GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  GITLAB_PROJECT_ID: ${CI_PROJECT_ID}  # needed to be exported to the project's environments
  FLASK_APP: manage.py

sonarcloud-check:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  only:
    - merge_requests
    - master

test-merge-request-changes:
  stage: test
  only:
    - merge_requests
  image:
    name: docker:19.03.13-git
  services:
    - name: docker:19.03.0-dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
    ENV: test
    CI_DEBUG_TRACE: "true"
  before_script:
    - echo $CI_BUILD_TOKEN | docker login -u gitlab-ci-token --password-stdin ${CI_REGISTRY}
  script:
    - echo "Running Tests..."
    - cp ${group_shared_vars} ${CI_PROJECT_DIR}/.env
    - docker build . -f Dockerfile-testing -t test_merge_req --build-arg GITLAB_PROJECT_ID=${GITLAB_PROJECT_ID}
    - docker run --cidfile="my-package.txt" test_merge_req:latest
  after_script:
    - touch text2.txt
    - docker cp $(cat my-package.txt):/app/tests/coverage/coverage.xml coverage.xml
    - docker cp $(cat my-package.txt):/app/tests/coverage/junit.xml junit.xml
  timeout: 2h
  artifacts:
    when: always
    reports:
      cobertura:
        - coverage.xml
      junit:
        - junit.xml
  coverage: '/TOTAL.*\s+(\d+%)$/'
And here is my sonar-project.properties
sonar.projectKey=my_app-key
sonar.organization=my_org
sonar.sources=lib
sonar.tests=tests
sonar.exclusions=tests
sonar.language=python
sonar.python.version=3.8
I want the report that is generated in the container to be analyzed by SonarCloud on each merge request.
Also, when code is pushed to the master branch, I want the project's coverage percentage on SonarCloud to be updated, but it just shows 0%.
Is there any way that, after the merge request pipelines run, we get the SonarCloud analysis of the report produced in the Docker container?
And can the master branch coverage be updated without having to commit coverage.xml to the repo?
After digging into how GitLab CI stages and jobs work, and with the insight that this thread brought, I have tweaked the above GitLab CI config so that it:
First runs the tests
Then uploads the coverage outputs into the path specified for the artifacts
Then, in the next stage, pulls those artifacts
Replaces the path that pytest-cov wrote into the <source>...</source> element with a sed command (the path placed into the source element refers to the Docker container's filesystem, so after downloading the coverage report it does not resolve inside the GitLab CI container)
Runs the sonar-scanner on the coverage report.
GitLab CI:
stages:
  - build_test
  - analyze_test

before_script:
  - mkdir -p ~/.ssh &&
    cp $gitlab_private_key ~/.ssh/id_ed25519 &&
    chmod 600 ~/.ssh/id_ed25519 &&
    touch ~/.ssh/known_hosts &&
    ssh-keyscan gitlab.com >> ~/.ssh/known_hosts

variables:
  SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
  GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  GITLAB_PROJECT_ID: ${CI_PROJECT_ID}  # needed to be exported to the projects environments

include:
  - template: Code-Quality.gitlab-ci.yml
test-merge-request-changes:
  stage: build_test
  only:
    - merge_requests
    - master
  image:
    name: docker:19.03.13-git
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - build
  services:
    - name: docker:19.03.0-dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
    ENV: test
    CI_DEBUG_TRACE: "true"
  before_script:
    - echo $CI_BUILD_TOKEN | docker login -u gitlab-ci-token --password-stdin ${CI_REGISTRY}
  script:
    - echo "Running Tests..."
    - cp ${group_shared_vars} ${CI_PROJECT_DIR}/.env
    - docker build . -f Dockerfile-testing -t test_merge_req --build-arg GITLAB_PROJECT_ID=${GITLAB_PROJECT_ID}
    - docker run --cidfile="my-package.txt" test_merge_req:latest
  after_script:
    - docker cp $(cat my-package.txt):/app/tests/coverage/coverage.xml build/coverage.xml
    - docker cp $(cat my-package.txt):/app/tests/coverage/junit.xml build/junit.xml
  timeout: 2h
  artifacts:
    when: always
    paths:
      - build
    reports:
      cobertura:
        - build/coverage.xml
      junit:
        - build/junit.xml
    expire_in: 30 min
  coverage: '/TOTAL.*\s+(\d+%)$/'

sonarcloud-check:
  stage: analyze_test
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - echo "Logging coverage..."
    - sed -i "s|<source>\/app\/my_app<\/source>|<source>$CI_PROJECT_DIR\/my_app<\/source>|g" ./build/coverage.xml
    - sonar-scanner
  only:
    - merge_requests
    - master
  dependencies:
    - test-merge-request-changes
The takeaway from my issue was that GitLab doesn't share artifacts between jobs placed in the same stage, but rather between jobs in different stages.
So I just created two separate stages, one for building and running the tests (build_test) and one for analyzing them (analyze_test). Then I made the sonarcloud-check job dependent on the test-merge-request-changes job. This way, we make sure the tests run first in test-merge-request-changes, and the uploaded artifacts are then consumed by sonarcloud-check in the analyze_test stage.
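As a stripped-down illustration of that artifacts-between-stages pattern (the job names and the dummy report below are placeholders, not taken from the config above):

stages:
  - build_test
  - analyze_test

produce_report:
  stage: build_test
  script:
    - mkdir -p build && echo "dummy report" > build/coverage.xml   # stand-in for the real test run
  artifacts:
    paths:
      - build

analyze_report:
  stage: analyze_test
  dependencies:                # fetch only the artifacts uploaded by produce_report
    - produce_report
  script:
    - cat build/coverage.xml   # the build/ directory from the previous stage is available here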

Using cache to execute a binary that is built in a previous stage in GitLab CI

I'm trying to use GitLab CI to build C code in a pipeline stage and to execute it in the following one.
The problem is that the execution test stage doesn't find the binary file.
I'd like not to have to rebuild at every stage in order to use less CPU during the pipeline, so I thought that cache should be the way to go.
Here is my .gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

job:build:
  stage: build
  before_script:
    - pwd
  script:
    - mkdir bin/
    - gcc -o bin/main.exe *.c
  cache:
    key: build-cache
    paths:
      - bin/
  after_script:
    - ls -R

job:test:unit:
  stage: test
  script: echo 'unit tests'

job:test:functional:
  stage: test
  before_script:
    - pwd
    - ls -R
  script:
    - echo 'functionnal test'
    - cd bin ; ./main.exe

job:deploy:
  stage: deploy
  script: echo 'Deploy stage'
So I figured out that I have to use artifacts.
Here is my code:
stages:
  - build
  - test
  - deploy

job:build:
  stage: build
  before_script:
    - pwd
  script:
    - mkdir bin/
    - gcc -o bin/main.exe *.c
  artifacts:
    expire_in: 1 day
    paths:
      - bin/main.exe
  after_script:
    - ls -R

job:test:unit:
  stage: test
  script: echo 'unit tests'

job:test:functional:
  stage: test
  before_script:
    - pwd
    - ls -R
  script:
    - echo 'functionnal test'
    - cd bin ; ./main.exe
  dependencies:
    - job:build

job:deploy:
  stage: deploy
  script: echo 'Deploy stage'
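Artifacts are the right tool here because they are uploaded to GitLab and reliably handed to jobs in later stages, whereas the cache is stored per runner and is only a best-effort optimization. Not from the original post, but the same hand-off can also be written with needs, which additionally lets the functional test start as soon as the build job finishes instead of waiting for the whole build stage. A sketch reusing the job names above:

job:test:functional:
  stage: test
  needs:
    - job: "job:build"
      artifacts: true   # download bin/main.exe produced by job:build
  script:
    - cd bin ; ./main.exe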

/busybox/sh: eval: line 109: apk: not found while build_backend on GitLab

I'm trying to get the build_backend stage to pass before build_djangoapp with a Dockerfile on GitLab, but it fails with this error:
/busybox/sh: eval: line 111: apk: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
GitLab CI/CD project
.gitlab-ci.yml
# Official image for Hashicorp's Terraform. It uses light image which is Alpine
# based as it is much lighter.
#
# Entrypoint is also needed as image by default set `terraform` binary as an
# entrypoint.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
  PLAN: plan.tfplan
  PLAN_JSON: tfplan.json
  TF_ROOT: ${CI_PROJECT_DIR}
  GITLAB_TF_PASSWORD: ${CI_JOB_TOKEN}

cache:
  paths:
    - .terraform

before_script:
  - apk --no-cache add jq
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - cd ${TF_ROOT}
  - terraform --version
  - echo ${GITLAB_TF_ADDRESS}
  - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${MY_GITLAB_USERNAME}" -backend-config="password=${MY_GITLAB_ACCESS_TOKEN}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5"

stages:
  - validate
  - build
  - test
  - deploy
  - app_deploy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -out=$PLAN
    - terraform show --json $PLAN | convert_report > $PLAN_JSON
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.tfplan
    reports:
      terraform: ${TF_ROOT}/tfplan.json

# Separate apply job for manual launching Terraform as it can be destructive
# action.
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
  when: manual
  only:
    - master

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

# https://github.com/GoogleContainerTools/kaniko#pushing-to-google-gcr
build_djangoapp:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo 1
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=$TF_VAR_gcp_creds_file
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination gcr.io/{TF_VAR_gcp_project_name}/djangoapp:$CI_COMMIT_REF_NAME
  when: manual
  only:
    - master
  needs: []

app_deploy:
  image: google/cloud-sdk
  stage: app_deploy
  before_script:
    - echo 1
  environment:
    name: production
  script:
    - gcloud auth activate-service-account --key-file=${TF_VAR_gcp_creds_file}
    - gcloud container clusters get-credentials my-cluster --region us-central1 --project ${TF_VAR_gcp_project_name}
    - kubectl apply -f hello-kubernetes.yaml
  when: manual
  only:
    - master
  needs: []
I looked at your project and it appears you figured this one out already.
Your .gitlab-ci.yml has a global before_script block that tries to install packages with apk. But your build_backend job is based on the kaniko image, which uses BusyBox, and BusyBox has no apk (nor apt-get, for that matter). In a later commit you overrode the before_script for build_backend, and this took away the problem.
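For reference, a minimal sketch of that kind of override (my illustration of the fix described above, not the actual commit): giving the kaniko job an empty before_script means the global apk install never runs in its BusyBox shell.

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script: []   # replaces the global before_script; the kaniko/BusyBox image has no apk
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME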
