GitLab CI: Execute own Script in Kaniko Job

I have the following job to build images in my .gitlab-ci.yml:
dockerize:
  stage: containerize
  before_script:
    - eval $($CONTEXT_SCRIPT_PATH)
  environment:
    name: $CONTEXT
    url: XXX
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.23.0
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_DOCKER_REGISTRY_URL\":{\"username\":\"$CI_DOCKER_REGISTRY_USER\",\"password\":\"$CI_DOCKER_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_DOCKER_REGISTRY_URL/portal/frontend:$CONTEXT-$CI_PIPELINE_IID --build-arg VERSION_TAG=$CONTEXT-$CI_PIPELINE_IID --build-arg COMMIT_TIME=$COMMIT_TIME
    - echo "Pushed to Registry - $CI_DOCKER_REGISTRY_URL/portal/frontend:$CONTEXT-$CI_PIPELINE_IID"
  retry: 2
In the before_script section, the environment variable $CONTEXT gets set. $CONTEXT_SCRIPT_PATH is defined in a global variables section:
variables:
  CONTEXT_SCRIPT_PATH: "${CI_PROJECT_DIR}/Scripts/get_context.sh"
But when the job runs, it can't find the script:
/busybox/sh: eval: line 90: /builds/portal/portal-frontend/Scripts/get_context.sh: not found
It works in other jobs, so is Kaniko running in some separate environment? How do I specify the right path?

Sorry for the late response, but for anyone who stumbles across this:
The script file must already exist in the current job. Either:
It exists in the project git repository and is added automatically by the git clone that runs before each job
It is created manually during the job - echo 'my script stuff' > $CONTEXT_SCRIPT_PATH
(To keep all untracked files as artifacts by default: default: { artifacts: { untracked: true } })
Or it is included as an artifact:
---
variables:
  CONTEXT_SCRIPT_PATH: "${CI_PROJECT_DIR}/Scripts/get_context.sh"

build-first:
  stage: build
  artifacts:
    paths: ["$CONTEXT_SCRIPT_PATH"] # tell GitLab to save the script as an artifact

build-second:
  stage: build
  needs: [build-first] # wait for build-first and fetch its artifacts (needs works within the same stage)
  before_script: ["eval $($CONTEXT_SCRIPT_PATH)"]
NOTES:
Globally defined image, services, cache, before_script and after_script are deprecated - put them under default: instead. variables: remains a top-level keyword, as shown above.
dependencies: can only reference jobs in earlier stages; to consume artifacts from a job in the same stage, use needs: and reference the required job, as above.
If you use neither dependencies: nor needs:, all artifacts from all previous stages are passed to each job.
References (GitLab): CI keyword reference - artifacts, dependencies, needs, default
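One more Kaniko-specific pitfall worth checking, since the error above is prefixed with /busybox/sh: the kaniko debug image ships only BusyBox's sh, so a script whose shebang points to a shell that is absent from the image (e.g. #!/bin/bash) fails with the same "not found" message even when the file exists. Invoking the script through sh explicitly sidesteps that; a minimal sketch of the job's before_script under that assumption:

before_script:
  # run the script with the busybox shell instead of relying on its shebang
  - eval $(sh "$CONTEXT_SCRIPT_PATH")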

Related

How to pass global variable value to next stage in GitLab CI/CD

Based on GitLab documentation, you can use the variables keyword to pass CI/CD variables to a downstream pipeline.
I have a global variable DATABASE_URL.
The init stage retrieves the connection string from AWS Secrets Manager and sets it to DATABASE_URL.
Then in the deploy stage I want to use that variable to deploy the database. However, in the deploy stage the variable's value is empty.
variables:
  DATABASE_URL: ""

default:
  tags:
    - myrunner

stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "$DATABASE_URL"

deploy-dev-database:
  image: node:14
  stage: deploy
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
The init job echoes the DATABASE_URL correctly. However, DATABASE_URL is empty in the deploy stage.
Questions:
1. How do I pass the global variable across the stages?
2. The Node.js database deployment process will consume this variable as process.env.DATABASE_URL - will it be available in the Node.js environment?
Variables are resolved by precedence: when you print a variable inside a job, GitLab looks for it inside the job itself first, then moves up to what's defined in the CI YAML file (the variables: section), then the project, group, and instance settings. A job never looks at other jobs.
If you want to pass a variable from one job to another, make sure you don't set the variable globally at all, and instead pass it between jobs following the documentation on passing environment variables to another job.
Basically,
Make sure to remove DATABASE_URL: "" from the variables section.
Make the last line of your init-job script - echo "DATABASE_URL=$DATABASE_URL" >> init.env (the dotenv format requires KEY=value lines). You can call your .env file whatever you want, of course.
Add an artifacts: section to your init-job.
Add a dependencies: or needs: section to your deploy-dev-database job to pull the variable.
You should end up with something like this:
stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "DATABASE_URL=$DATABASE_URL" >> init.env
  artifacts:
    reports:
      dotenv: init.env

deploy-dev-database:
  image: node:14
  stage: deploy
  dependencies:
    - init-job
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
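As for the second question: variables delivered through a dotenv report are injected into dependent jobs as ordinary environment variables, so any child process, including Node.js, inherits them, and process.env.DATABASE_URL will be populated. A quick way to verify inside deploy-dev-database (an illustrative extra script line, not part of the original answer):

script:
  - node -e "console.log(process.env.DATABASE_URL)" # prints the value passed via init.env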

Gitlab CI: How do I use the environment variable from one stage as needs:project ref in another

I have two jobs in the same project: job A and job B.
job A creates an environment variable EXTERNAL_PROJ_REF=some_tag and exports it through a .env file.
job B needs to download artifacts from an external_project and package them with other artifacts from the current project. I want to be able to dynamically choose the commit reference from which these external artifacts get downloaded. I am trying to use the environment variable EXTERNAL_PROJ_REF as the ref for external_project needed by job B.
job A:
  stage: build
  script:
    - echo "EXTERNAL_PROJ_REF=`./generate_variable.sh`" > build.env # evaluates to EXTERNAL_PROJ_REF=some_tag
  artifacts:
    reports:
      dotenv: build.env
job B:
  stage: package
  script:
    - ./do_packaging_job.sh
  needs:
    - job: job A
      artifacts: true
    - project: external_project
      ref: $EXTERNAL_PROJ_REF
      job: external_job
      artifacts: true
When I run this pipeline though, job B instantly fails with the following error:
This job depends on other jobs with expired/erased artifacts:
If I hardcode ref to some_tag, the job does not fail, and I can confirm the EXTERNAL_PROJ_REF is successfully passed to job B.
job B:
  stage: package
  script:
    - echo "Ref = $EXTERNAL_PROJ_REF" # Correctly prints "Ref = some_tag"
    - ./do_packaging_job.sh
  needs:
    - job: job A
      artifacts: true
    - project: external_project
      ref: some_tag # hardcoded so the job doesn't fail
      job: external_job
      artifacts: true
However, when I use ref: $EXTERNAL_PROJ_REF, the pipeline fails. Can somebody tell me what I'm missing?
A simple solution I use for sharing variables between two GitLab CI jobs:
stages:
  - build
  - package

job A:
  stage: build
  script:
    - echo "EXTERNAL_PROJ_REF=`./generate_variable.sh`" > build.env
  artifacts:
    paths:
      - build.env

job B:
  stage: package
  before_script:
    - export $(cat build.env | xargs)
  script:
    - ./do_packaging_job.sh
I finally realized GitLab does not support what I want to do, at least not this way. According to this link, a variable passed from a different job can only be used in the before_script, script, or after_script sections of a job; it cannot be used to configure the job. So I cannot use it in the needs section of job B.
Luckily, I found a simple workaround using the GitLab API. I have API access to external_project, so I just use wget to download the artifact I need from the dynamically selected commit reference. Afterwards, I pass the artifact directly to job B.
job A:
  stage: build
  script:
    # Dynamically create commit reference
    - EXTERNAL_PROJ_REF=`./generate_commit_ref.sh`
    # Download artifact with Gitlab API
    - wget --header "PRIVATE-TOKEN:${EXTERNAL_PROJ_TOKEN}" --output-document outputFileName.file "${CI_API_V4_URL}/projects/${EXTERNAL_PROJ_ID}/jobs/artifacts/${EXTERNAL_PROJ_REF}/raw/${EXTERNAL_ARTIFACT_PATH}?job=${EXTERNAL_JOB_NAME}"
    # btw CI_API_V4_URL is a gitlab defined variable
  artifacts:
    paths:
      - outputFileName.file

job B:
  stage: package
  script:
    - ./do_packaging_job.sh
  needs:
    # Now this packaging job only needs job A. It doesn't need the external job anymore
    - job: job A
      artifacts: true
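A side note on the wget call: if the current project's CI job token has access to external_project, the same artifacts endpoint also accepts it, so a personal access token is not strictly required (whether this applies depends on your projects' permissions, so treat it as an assumption):

- wget --header "JOB-TOKEN:${CI_JOB_TOKEN}" --output-document outputFileName.file "${CI_API_V4_URL}/projects/${EXTERNAL_PROJ_ID}/jobs/artifacts/${EXTERNAL_PROJ_REF}/raw/${EXTERNAL_ARTIFACT_PATH}?job=${EXTERNAL_JOB_NAME}"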

Gitlab CI SAST access to gl-sast-report.json artifact in subsequent stage

I want to use the gl-sast-report.json file created during the SAST process in a subsequent stage of my CI, but it is not found.
ci.yml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - .post

sast:
  rules:
    - if: $CI_COMMIT_TAG

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - ls
    - echo "in post stage"
    - cat gl-sast-report.json
output:
Running with gitlab-runner 13.2.1 (efa30e33)
on blah blah blah
Preparing the "docker" executor
00:01
.
.
.
Preparing environment
00:01
Running on runner-zqk9bcef-project-4296-concurrent-0 via ff93ba7b6ee2...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in blah blah
Checking out 9c2edf67 as 39-test-dso...
Removing gl-sast-report.json
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ ls
<stuff in the repo>
$ echo "in .post stage"
in post stage
$ cat gl-sast-report.json
cat: can't open 'gl-sast-report.json': No such file or directory
ERROR: Job failed: exit code 1
You can see the line Removing gl-sast-report.json which I assume is the issue.
I don't see that anywhere in the SAST.gitlab-ci.yml at https://gitlab.com/gitlab-org/gitlab/-/blob/v11.11.0-rc2-ee/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml#L33-45
Any ideas on how to use this artifact in the next stage of my CI pipeline?
UPDATE:
So I tried out k33g_org's suggestion below, but to no avail. It seems this is due to limitations in the sast template specifically. I ran the following test:
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - upload

something:
  stage: test
  script:
    - echo "in something"
    - echo "this is something" > something.txt
  artifacts:
    paths: [something.txt]

sast:
  before_script:
    - echo "hello from before sast"
    - echo "this is in the file" > test.txt
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths: [gl-sast-report.json, test.txt]

send-reports:
  stage: upload
  dependencies:
    - sast
    - something
  before_script:
    - echo "This is the send-reports before_script"
  script:
    - echo "in send-reports job"
    - ls
  artifacts:
    reports:
      sast: gl-sast-report.json
Three changes:
Updated code with k33g_org's suggestion
Created another artifact in the sast job (to see if it would pass through to send-reports job)
Created a new job (something) where I created a new something.txt artifact (to see if it would pass through to send-reports job)
Output:
Preparing environment
00:01
Running on runner-zqx7qoq-project-4296-concurrent-0 via e3fe672984b4...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /<repo>
Checking out 26501c44 as <branch_name>...
Removing something.txt
Skipping Git submodules setup
Downloading artifacts
00:00
Downloading artifacts for something (64950)...
Downloading artifacts from coordinator... ok id=64950
responseStatus=200 OK token=zoJwysdq
Executing "step_script" stage of the job script
00:01
$ echo "This is the send-reports before_script"
This is the send-reports before_script
$ echo "in send-reports job"
in send-reports job
$ ls
...<other stuff in repo>
something.txt
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: gl-sast-report.json: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Notes:
something.txt made it to this job
none of the artifacts from the sast job made it to subsequent jobs
I can only conclude that something internal to the sast template is preventing its artifacts from propagating to subsequent jobs.
In the first job (sast) add this:
artifacts:
  paths: [gl-sast-report.json]
  reports:
    sast: gl-sast-report.json
and in the next job (send-reports) add this:
artifacts:
  reports:
    sast: gl-sast-report.json
Then you should be able to access the report in the next job (send-reports).
Instead of referencing the gl-sast-report.json artifact as a SAST report, reference it as a regular artifact. So you should declare the artifact this way:
artifacts:
  paths:
    - 'gl-sast-report.json'
instead of:
artifacts:
  reports:
    sast: gl-sast-report.json
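Put together with the jobs from the question, a minimal sketch of this approach (with the caveat from the update above that some versions of the SAST template may still interfere with artifact overrides):

sast:
  artifacts:
    paths:
      - 'gl-sast-report.json'

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - cat gl-sast-report.json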
I spent a full day banging my head against this, trying to access the gl-sast-report.json file generated by the built-in IaC scanner. Here's what ultimately worked for me:
First and foremost, DO NOT use this code suggested by GitLab's documentation:
include:
  - template: Security/SAST-IaC.latest.gitlab-ci.yml
The above code works fine if all you want to do is scan for IaC vulnerabilities and download the report from the GitLab UI later. But who wants to do that?! I want to access the report in my next job and fail the pipeline if there are medium+ vulnerabilities in the report!
If that's what you want to do, you'll need to add all of the code from the official GitLab IaC scanner template to your pipeline, and then make some modifications. You can find the latest template code here, or use my example below.
Modified template:
# Read more about this feature here: https://docs.gitlab.com/ee/user/application_security/iac_scanning/
#
# Configure SAST with CI/CD variables (https://docs.gitlab.com/ee/ci/variables/index.html).
# List of available variables: https://docs.gitlab.com/ee/user/application_security/iac_scanning/index.html
variables:
  # Setting this variable will affect all Security templates
  # (SAST, Dependency Scanning, ...)
  TEMPLATE_REGISTRY_HOST: 'registry.gitlab.com'
  SECURE_ANALYZERS_PREFIX: "$TEMPLATE_REGISTRY_HOST/security-products"
  SAST_IMAGE_SUFFIX: ""
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"

iac-sast:
  stage: test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    #reports:
    #  sast: gl-sast-report.json
    when: always
  rules:
    - when: never
      # `rules` must be overridden explicitly by each child job
      # see https://gitlab.com/gitlab-org/gitlab/-/issues/218444
  variables:
    SEARCH_MAX_DEPTH: 4
  allow_failure: true
  script:
    - /analyzer run

kics-iac-sast:
  extends: iac-sast
  image:
    name: "$SAST_ANALYZER_IMAGE"
  variables:
    SAST_ANALYZER_IMAGE_TAG: 3
    SAST_ANALYZER_IMAGE: "$SECURE_ANALYZERS_PREFIX/kics:$SAST_ANALYZER_IMAGE_TAG$SAST_IMAGE_SUFFIX"
  rules:
    - if: $SAST_DISABLED
      when: never
    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/
      when: never
    - if: $CI_COMMIT_BRANCH

Enforce Compliance:
  stage: Compliance
  before_script:
    - apk add jq
  script:
    - jq -r '.vulnerabilities[] | select(.severity == "Critical") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json > results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "High") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "Medium") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - chmod u+x check-sast-results.sh
    - ./check-sast-results.sh
You'll also need to make sure to add two stages to your pipeline (if you don't have them already):
stages:
  # add these to whatever other stages you already have
  - test
  - Compliance
Note: it's extremely important that your job which is trying to access gl-sast-report.json ("Compliance" in this case) is not in the same stage as the sast scans themselves ("test" in this case). If they are, then your job will try to access the report before it exists and fail.
I'll include my shell script referenced in the pipeline in case you want to use that too:
#!/bin/sh
if [ -s results.txt ]; then
  echo ""
  echo ""
  cat results.txt
  echo ""
  echo "ERROR: SAST SCAN FOUND VULNERABILITIES - FIX ALL VULNERABILITIES TO CONTINUE"
  echo ""
  exit 1
fi
This is a basic script that checks to see if the "results.txt" file has any contents. If it does, it exits with code 1 to break the pipeline and print the vulnerabilities. If there are no contents in the file, the script exits with code 0 and the pipeline continues (allowing you to deploy your infra). Save the file above as "check-sast-results.sh" in the root directory of your GitLab repository (the same level where ".gitlab-ci.yml" resides).
Hope this helps someone out there.
I've found this issue too; it also impacts some of the other scanners. I raised an issue with GitLab to fix it:
https://gitlab.com/gitlab-org/gitlab/-/issues/345696

How to trigger Gitlab CI pipeline manually, when in normal conditions, it is triggered by webhook with commit Ids?

I have a GitLab CI pipeline which is triggered by a Bitbucket webhook with the current and last commit IDs. I also want to re-run the pipeline manually whenever the build produced by the webhook-triggered GitLab CI file is not working as expected.
I tried the RUN-PIPELINE option, but it shows the error:
The form contains the following error:
No stages/jobs for this pipeline.
Here is the GitLab CI file. The include refers to another project where the standard YAML file for the pipeline is kept:
include:
  - project: Path/to/project
    ref: bb-deployment
    file: /bitbucket-deployment.yaml

variables:
  TILLER_NAMESPACE: <namespace>
  NAMESPACE: testenv
  REPO_REF: testenvbranch
  LastCommitSHA: <commit SHA from webhook>
  CurrentCommitSHA: <current commit SHA from webhook>
Here is the detailed gitlab-ci file provided in the other project, which has the stages:
stages:
  - pipeline
  - build

variables:
  ORG: test
  APP_NAME: $CI_PROJECT_NAME

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIIVATE_KEY2" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Building CI Script:
  stage: pipeline
  image: python:3.6
  only:
    refs:
      - master
  script:
    - |
      curl https://github.com/org/scripts/branch/install.sh | bash -s latest
      source /usr/local/bin/pipeline-variables.sh
      git clone git@bitbucket.org:$ORG/$APP_NAME.git
      cd $APP_NAME
      git checkout $lastCommit
      cp -r env old
      git checkout $bitbucketCommit
      $CMD_DIFF old env
      $CMD_BUILD
      $CMD_INSTALL updatedReposList.yaml deletedReposList.yaml /tmp/test $NAMESPACE $REPO_REF $ORG $APP_NAME $lastCommit $bitbucketCommit
      cat cicd.yaml
      mv cicd.yaml ..
  artifacts:
    paths:
      - cicd.yaml

Deploying Apps:
  stage: build
  only:
    refs:
      - master
  trigger:
    include:
      - artifact: cicd.yaml
        job: Building CI Script
    strategy: depend
In the manual trigger, instead of considering the last and current commit SHAs, it should rebuild the application.
Any help will be appreciated.
Thank you for your comment (below); I see you are using the include directive (https://docs.gitlab.com/ce/ci/yaml/#include) in one .gitlab-ci.yml to include a GitLab CI YAML file from another project.
I can duplicate this error ("No stages / jobs for this pipeline") by invoking "Run pipeline" on project 1, which is configured to include the GitLab CI YAML from project 2, when project 2's GitLab CI YAML is restricted to the master branch but I'm running the pipeline on another branch.
For example, let's say project 1 is called "stackoverflow-test" and its .gitlab-ci.yml is:
include:
  - project: atsaloli/test
    file: /.gitlab-ci.yml
    ref: mybranch
And project 2 is called "test" (in my own namespace, atsaloli) and its .gitlab-ci.yml is:
my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
If I select "Run Pipeline" in the GitLab UI in project 1 on a branch other than "master", I then get the error message "No stages / jobs for this pipeline".
That's because no job is defined for my non-master branch, and with no jobs defined, there are no stages either.
I hope that sheds some light on what's going on with your webhook.
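If the goal is to let manual runs work from other branches, the restriction in project 2 has to be relaxed. One possible sketch using rules in place of only ($CI_PIPELINE_SOURCE is "web" for pipelines started from the Run pipeline button; adapt to your branching policy):

my_job:
  script: echo hello world
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
    - if: $CI_PIPELINE_SOURCE == "web" # allow manual runs from any branch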

How to reuse job in .gitlab-ci.yml

I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
This feels wrong to have all that deploy code repeated in two places. It gets worse. There are also deploy_external, deploy_release, and scheduled variants.
My question:
Is there a way that I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY basically)?
Alternatively: Is there is a better way that I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code to run after the deploy>

job2:
  stage: deploy
  script: ....<code to run after the deploy>
Note: the default section fails to function as such if you try to execute a job locally with the gitlab-runner exec command - use YAML anchors instead.
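Since plain YAML anchors cannot be reused across files pulled in with include:, a third option is GitLab's !reference tag, which does work across included files. A sketch reusing the hidden job from the extends example above:

.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  stage: deploy
  script:
    # splice the shared script lines into this job
    - !reference [.deployment_script, script]
  when: manual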
