Publish a release upon successful pipeline - GitLab

I am using a private gitlab-runner to build an ISO and then upload the ISO and its log to my S3 bucket. This part of the pipeline works without a hitch, but I recently decided to create a "release" upon a successful pipeline. For this reason, I am using the following .gitlab-ci.yml:
stages:
  - build
  - release

build_amd64:
  stage: build
  # Do not run the pipeline if the following files/directories are modified:
  # except:
  #   changes:
  #     - Pictures/
  #     - .gitignore
  #     - FUNDING.yml
  #     - LICENSE
  #     - README.md
  tags:
    - digitalocean
  timeout: 4 hours
  before_script:
    # Make sure the dependencies for the build are up-to-date:
    - /usr/bin/apt update
    - /usr/bin/apt install --only-upgrade -y curl git live-build cdebootstrap
    # Save the job ID:
    - echo "Saving $CI_JOB_ID to .env"
    - echo BUILD_JOB_ID=$CI_JOB_ID >> .env
  script:
    # Build the ISO:
    - ./build.sh --arch amd64 --variant i3_gaps --verbose
  after_script:
    - |
      if [ "$CI_JOB_STATUS" == 'success' ]; then
        # Remove everything except the "images" folder:
        shopt -s extglob
        /usr/bin/rm -rf !(images/)
        # Upload log:
        # /usr/bin/s3cmd put ./images/log s3://$S3_BUCKET/download/log
        # Set log access to public:
        # /usr/bin/s3cmd setacl s3://$S3_BUCKET/download/log --acl-public
        # Upload iso:
        # /usr/bin/s3cmd put ./images/iso s3://$S3_BUCKET/download/iso
        # Set iso access to public:
        # /usr/bin/s3cmd setacl s3://$S3_BUCKET/download/iso --acl-public
      else
        # If the job fails, skip the upload process:
        echo 'Skipping upload process due to job failure'
        sleep 5; /usr/sbin/reboot
        /usr/bin/screen -dm /bin/sh -c '/usr/bin/sleep 5; /usr/sbin/reboot;'
      fi
  artifacts:
    reports:
      # Ensure that we have access to .env in the next stage
      dotenv: .env

publish_release:
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  stage: release
  needs:
    - job: build_amd64
      artifacts: true
  release:
    name: 'ISO | $CI_COMMIT_SHORT_SHA | $BUILD_JOB_ID'
    description: "Created and released via GitLab CI/CD."
    tag_name: "$CI_COMMIT_SHORT_SHA"
    ref: '$CI_COMMIT_SHA'
    assets:
      links:
        - name: "ISO"
          url: "https://foo.bar"
          link_type: "image"
        - name: "Build Log"
          url: "https://foo.bar"
          link_type: "other"
However, I have realized that when the release job runs, it creates the release without any issues, but it then spawns a new pipeline for the new tag (in this case $CI_COMMIT_SHORT_SHA), in addition to the pipeline that originally ran for the main branch.
I checked the documentation but I cannot find anything regarding this matter.
Is there a way to not run a pipeline when a release is published?

What is happening here is that because the specified tag doesn't exist, it will be created with the release. This causes a tagged pipeline to run (just as if you created the tag and pushed it).
If you just want to ignore tag pipelines, you can use a workflow rule to exclude them:
workflow:
  rules:
    - if: $CI_COMMIT_TAG
      when: never # ignore pipelines for tags
    - when: always # run the pipeline otherwise
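If you still want hand-pushed tags (e.g. v1.2.3) to run pipelines and only want to suppress the tags auto-created by the release job, a variant like the following should work. This is a sketch that assumes your release tags are always the default 8-character short SHA, which is worth verifying for your project:
workflow:
  rules:
    # $CI_COMMIT_SHORT_SHA is 8 hex characters by default; skip only pipelines
    # for tags that look like short SHAs, so tags like v1.2.3 still build.
    - if: '$CI_COMMIT_TAG =~ /^[0-9a-f]{8}$/'
      when: never
    - when: always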

Related

How can I use an output from my GitLab pipeline as a value in the pipeline?

I have a pipeline that builds an AMI image, but I'd also like to be able to use that AMI ID in an additional stage afterwards. I'm not sure how to capture an output (the AMI ID) as a value for use further down the pipeline.
Here's my .gitlab-ci.yml file:
image:
  name: hashicorp/packer:latest
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - packer --version

stages:
  - build
  - deploy

get_packer:
  stage: build
  artifacts:
    paths:
      - packer
  script:
    - echo "Fetching packer"
    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
    - unzip packer_1.5.5_linux_amd64.zip
    - chmod +x packer

deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
Here's the tail-end of the output from my pipeline that spits out the AMI ID:
Build 'rhel.amazon-ebs.rhel-stig' finished after 8 minutes 17 seconds.
==> Wait completed after 8 minutes 17 seconds
==> Builds finished. The artifacts of successful builds are:
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-08155b7eaa9e0274f
Cleaning up project directory and file based variables
00:00
Job succeeded
Notice how it outputs the region and the AMI ID. How can I use that AMI ID later in the same pipeline if I want to extend it like so?
Theoretical .gitlab-ci.yml file:
image:
  name: hashicorp/packer:latest
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - packer --version

stages:
  - build
  - deploy
  - test

get_packer:
  stage: build
  artifacts:
    paths:
      - packer
  script:
    - echo "Fetching packer"
    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
    - unzip packer_1.5.5_linux_amd64.zip
    - chmod +x packer

deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .

test_image:
  stage: test
  script:
    - (Do something with the outputted AMI ID from the deploy stage)
Update:
New error after initial additions
Build 'rhel.amazon-ebs.rhel-stig' finished after 9 minutes 22 seconds.
==> Wait completed after 9 minutes 22 seconds
==> Builds finished. The artifacts of successful builds are:
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-04b363eecd4fd841a
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-04b363eecd4fd841a
$ AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
/bin/bash: line 137: jq: command not found
Uploading artifacts for failed job
00:00
Uploading artifacts...
WARNING: image.env: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The first thing you need to do is configure the packer build to use the manifest post-processor:
post-processor "manifest" {
  output     = "manifest.json"
  strip_path = true
}
This will generate a JSON file which contains the AMI ID at the end of the build.
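For reference, the generated manifest.json looks roughly like this (illustrative values, other fields omitted). The artifact_id has the form region:ami-id, which is why the jq/cut pipeline below takes the part after the colon:
{
  "builds": [
    {
      "name": "rhel-stig",
      "artifact_id": "us-east-1:ami-08155b7eaa9e0274f"
    }
  ]
}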
Then, you can use the dotenv artifacts construct to share variables with subsequent jobs.
Here's how it works:
deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
    - AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
    - echo "AMI_ID=$AMI_ID" > image.env
  artifacts:
    reports:
      dotenv: aws/image.env

test_image:
  stage: test
  script:
    - echo $AMI_ID
  needs:
    # reference the producing job by name (there is no job called "build")
    - job: deploy_packer
      artifacts: true
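As for the jq: command not found error in the update: the hashicorp/packer image does not ship jq, so it has to be installed before use. A minimal sketch of the same deploy job, assuming the image is Alpine-based so apk is available (verify with cat /etc/os-release):
deploy_packer:
  stage: deploy
  before_script:
    # jq is not included in the packer image; install it first
    # (assumes an Alpine base where apk is available)
    - apk add --no-cache jq
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
    - AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
    - echo "AMI_ID=$AMI_ID" > image.env
  artifacts:
    reports:
      dotenv: aws/image.env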

GitLab CI SAST: access to gl-sast-report.json artifact in subsequent stage

I want to use the gl-sast-report.json file created during the SAST process in a subsequent stage of my CI, but it is not found.
ci.yml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - .post

sast:
  rules:
    - if: $CI_COMMIT_TAG

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - ls
    - echo "in post stage"
    - cat gl-sast-report.json
Output:
Running with gitlab-runner 13.2.1 (efa30e33)
on blah blah blah
Preparing the "docker" executor
00:01
.
.
.
Preparing environment
00:01
Running on runner-zqk9bcef-project-4296-concurrent-0 via ff93ba7b6ee2...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in blah blah
Checking out 9c2edf67 as 39-test-dso...
Removing gl-sast-report.json
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ ls
<stuff in the repo>
$ echo "in .post stage"
in post stage
$ cat gl-sast-report.json
cat: can't open 'gl-sast-report.json': No such file or directory
ERROR: Job failed: exit code 1
You can see the line Removing gl-sast-report.json, which I assume is the issue.
I don't see that anywhere in the SAST.gitlab-ci.yml at https://gitlab.com/gitlab-org/gitlab/-/blob/v11.11.0-rc2-ee/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml#L33-45
Any ideas on how to use this artifact in the next stage of my CI pipeline?
UPDATE:
So I tried out k33g_org's suggestion below, but to no avail. It seems this is due to limitations in the sast template specifically. I ran the following test.
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - upload

something:
  stage: test
  script:
    - echo "in something"
    - echo "this is something" > something.txt
  artifacts:
    paths: [something.txt]

sast:
  before_script:
    - echo "hello from before sast"
    - echo "this is in the file" > test.txt
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths: [gl-sast-report.json, test.txt]

send-reports:
  stage: upload
  dependencies:
    - sast
    - something
  before_script:
    - echo "This is the send-reports before_script"
  script:
    - echo "in send-reports job"
    - ls
  artifacts:
    reports:
      sast: gl-sast-report.json
Three changes:
- Updated code with k33g_org's suggestion
- Created another artifact in the sast job (to see if it would pass through to the send-reports job)
- Created a new job (something) where I created a new something.txt artifact (to see if it would pass through to the send-reports job)
Output:
Preparing environment
00:01
Running on runner-zqx7qoq-project-4296-concurrent-0 via e3fe672984b4...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /<repo>
Checking out 26501c44 as <branch_name>...
Removing something.txt
Skipping Git submodules setup
Downloading artifacts
00:00
Downloading artifacts for something (64950)...
Downloading artifacts from coordinator... ok id=64950
responseStatus=200 OK token=zoJwysdq
Executing "step_script" stage of the job script
00:01
$ echo "This is the send-reports before_script"
This is the send-reports before_script
$ echo "in send-reports job"
in send-reports job
$ ls
...<other stuff in repo>
something.txt
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: gl-sast-report.json: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Notes:
- something.txt made it to this job
- all artifacts from the sast job do not make it to subsequent jobs
I can only conclude that there is something internal to the sast template that is not allowing artifacts to propagate to subsequent jobs.
In the first job (sast) add this:
artifacts:
  paths: [gl-sast-report.json]
  reports:
    sast: gl-sast-report.json
and in the next job (send-reports) add this:
artifacts:
  reports:
    sast: gl-sast-report.json
Then you should be able to access the report in the next job (send-reports)
Instead of referencing the gl-sast-report.json artifact as a sast report, reference it as a regular artifact.
So what you should do is declare the artifact this way
artifacts:
  paths:
    - 'gl-sast-report.json'
instead of
artifacts:
  reports:
    sast: gl-sast-report.json
I spent a full day banging my head against this, trying to access the gl-sast-report.json file generated by the built-in IaC scanner. Here's what ultimately worked for me:
First and foremost, DO NOT use this code suggested by GitLab's documentation:
include:
  - template: Security/SAST-IaC.latest.gitlab-ci.yml
The above code works fine if all you want to do is scan for IaC vulnerabilities and download the report from the GitLab UI later. But who wants to do that?! I want to access the report in my next job and fail the pipeline if there are medium+ vulnerabilities in the report!
If that's what you want to do, you'll need to add all of the code from the official GitLab IaC scanner template to your pipeline, and then make some modifications. You can find the latest template code here, or use my example below.
Modified template:
# Read more about this feature here: https://docs.gitlab.com/ee/user/application_security/iac_scanning/
#
# Configure SAST with CI/CD variables (https://docs.gitlab.com/ee/ci/variables/index.html).
# List of available variables: https://docs.gitlab.com/ee/user/application_security/iac_scanning/index.html

variables:
  # Setting this variable will affect all Security templates
  # (SAST, Dependency Scanning, ...)
  TEMPLATE_REGISTRY_HOST: 'registry.gitlab.com'
  SECURE_ANALYZERS_PREFIX: "$TEMPLATE_REGISTRY_HOST/security-products"
  SAST_IMAGE_SUFFIX: ""
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"

iac-sast:
  stage: test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    #reports:
    #  sast: gl-sast-report.json
    when: always
  rules:
    - when: never
    # `rules` must be overridden explicitly by each child job
    # see https://gitlab.com/gitlab-org/gitlab/-/issues/218444
  variables:
    SEARCH_MAX_DEPTH: 4
  allow_failure: true
  script:
    - /analyzer run

kics-iac-sast:
  extends: iac-sast
  image:
    name: "$SAST_ANALYZER_IMAGE"
  variables:
    SAST_ANALYZER_IMAGE_TAG: 3
    SAST_ANALYZER_IMAGE: "$SECURE_ANALYZERS_PREFIX/kics:$SAST_ANALYZER_IMAGE_TAG$SAST_IMAGE_SUFFIX"
  rules:
    - if: $SAST_DISABLED
      when: never
    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/
      when: never
    - if: $CI_COMMIT_BRANCH

Enforce Compliance:
  stage: Compliance
  before_script:
    - apk add jq
  script:
    - jq -r '.vulnerabilities[] | select(.severity == "Critical") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json > results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "High") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "Medium") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - chmod u+x check-sast-results.sh
    - ./check-sast-results.sh
You'll also need to make sure to add two stages to your pipeline (if you don't have them already):
stages:
  # add these to whatever other stages you already have
  - test
  - Compliance
Note: it's extremely important that your job which is trying to access gl-sast-report.json ("Compliance" in this case) is not in the same stage as the sast scans themselves ("test" in this case). If they are, then your job will try to access the report before it exists and fail.
I'll include my shell script referenced in the pipeline in case you want to use that too:
#!/bin/sh
if [ -s results.txt ]; then
  echo ""
  echo ""
  cat results.txt
  echo ""
  echo "ERROR: SAST SCAN FOUND VULNERABILITIES - FIX ALL VULNERABILITIES TO CONTINUE"
  echo ""
  exit 1
fi
This is a basic script that checks to see if the "results.txt" file has any contents. If it does, it exits with code 1 to break the pipeline and print the vulnerabilities. If there are no contents in the file, the script exits with code 0 and the pipeline continues (allowing you to deploy your infra). Save the file above as "check-sast-results.sh" in the root directory of your GitLab repository (the same level where ".gitlab-ci.yml" resides).
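If you'd rather not maintain a helper script, jq's -e flag can fail the job directly: it exits non-zero when the program's last output is false or null. A sketch of the same medium-and-above gate, replacing the chmod/./check-sast-results.sh lines:
- |
  jq -e '[.vulnerabilities[]
          | select(.severity == "Critical" or .severity == "High" or .severity == "Medium")]
          | length == 0' gl-sast-report.json
The trade-off is that you lose the printed vulnerability listing, so the helper script is friendlier when you need to see exactly what failed.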
Hope this helps someone out there.
I've found this issue too; it also impacts some of the other scanners. I raised an issue with GitLab to fix it:
https://gitlab.com/gitlab-org/gitlab/-/issues/345696

GitLab CI: Execute own script in Kaniko job

I have the following job to build images in my gitlab-ci.yml:
dockerize:
  stage: containerize
  before_script:
    - eval $($CONTEXT_SCRIPT_PATH)
  environment:
    name: $CONTEXT
    url: XXX
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.23.0
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_DOCKER_REGISTRY_URL\":{\"username\":\"$CI_DOCKER_REGISTRY_USER\",\"password\":\"$CI_DOCKER_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_DOCKER_REGISTRY_URL/portal/frontend:$CONTEXT-$CI_PIPELINE_IID --build-arg VERSION_TAG=$CONTEXT-$CI_PIPELINE_IID --build-arg COMMIT_TIME=$COMMIT_TIME
    - echo "Pushed to Registry - $CI_DOCKER_REGISTRY_URL/portal/frontend:$CONTEXT-$CI_PIPELINE_IID"
  retry: 2
In the before_script section, the env $CONTEXT gets set.
$CONTEXT_SCRIPT_PATH is set in a global variables section:
variables:
  CONTEXT_SCRIPT_PATH: "${CI_PROJECT_DIR}/Scripts/get_context.sh"
But when the job runs, it can't find the script:
/busybox/sh: eval: line 90: /builds/portal/portal-frontend/Scripts/get_context.sh: not found
It works in other jobs, so is Kaniko running in some separate environment? How do I specify the right path?
Sorry for the late response, but for anyone who stumbles across this:
The script file must already exist in the current Job.
Either:
- It exists in the project git repository and is automatically added by git clone before each job
- It is manually created during the job: echo 'my script stuff' > $CONTEXT_SCRIPT_PATH
- By default, track all untracked files as artifacts: default: { artifacts: untracked }
Or included as an artifact:
---
default: { variables: { CONTEXT_SCRIPT_PATH: "${CI_PROJECT_DIR}/Scripts/get_context.sh" } }

build-first:
  stage: build
  artifacts: { paths: ["$CONTEXT_SCRIPT_PATH"] } # tell gitlab to artifact the script

build-second:
  stage: build
  dependencies: [build-first] # run this job after build-first succeeds
  before_script: ["eval $($CONTEXT_SCRIPT_PATH)"]
NOTES:
- global sections are deprecated - use default and put variables within it, as shown above
- If artifacts from a previous job in the same stage are required, use dependencies and reference the required job
- If you do not use dependencies, all artifacts from previous stages are passed to each job
- References (GitLab CI keyword reference): Artifacts, Dependencies, Default
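To narrow down which of those cases applies, here is a quick debugging sketch inside the Kaniko job itself (same job as in the question, trimmed down): list the file before eval'ing it, and invoke it through sh explicitly so a missing executable bit or shebang cannot produce "not found" under busybox:
dockerize:
  stage: containerize
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.23.0
    entrypoint: [""]
  before_script:
    # confirm the script was actually cloned into this job's workspace
    - ls -l "$CONTEXT_SCRIPT_PATH" || echo "script missing from workspace"
    # run via sh so the +x bit is not required
    - eval "$(sh "$CONTEXT_SCRIPT_PATH")"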

How to trigger a GitLab CI pipeline manually, when in normal conditions it is triggered by a webhook with commit IDs?

I have a GitLab CI pipeline which is triggered by a Bitbucket webhook with the current and last commit IDs. I also want to be able to re-run the pipeline manually whenever the pipeline triggered by the webhook does not work as expected.
I tried the Run Pipeline option, but it shows the error:
The form contains the following error:
No stages/jobs for this pipeline.
Here is the GitLab CI file. The include refers to another project where the standard YAML file for the pipeline is kept:
include:
  - project: Path/to/project
    ref: bb-deployment
    file: /bitbucket-deployment.yaml

variables:
  TILLER_NAMESPACE: <namespace>
  NAMESPACE: testenv
  REPO_REF: testenvbranch
  LastCommitSHA: <commit SHA from webhook>
  CurrentCommitSHA: <current commit SHA from webhook>
Here is the detailed gitlab-ci file provided in the other project, which has the stages:
stages:
  - pipeline
  - build

variables:
  ORG: test
  APP_NAME: $CI_PROJECT_NAME

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIIVATE_KEY2" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Building CI Script:
  stage: pipeline
  image: python:3.6
  only:
    refs:
      - master
  script:
    - |
      curl https://github.com/org/scripts/branch/install.sh | bash -s latest
      source /usr/local/bin/pipeline-variables.sh
      git clone git@bitbucket.org:$ORG/$APP_NAME.git
      cd $APP_NAME
      git checkout $lastCommit
      cp -r env old
      git checkout $bitbucketCommit
      $CMD_DIFF old env
      $CMD_BUILD
      $CMD_INSTALL updatedReposList.yaml deletedReposList.yaml /tmp/test $NAMESPACE $REPO_REF $ORG $APP_NAME $lastCommit $bitbucketCommit
      cat cicd.yaml
      mv cicd.yaml ..
  artifacts:
    paths:
      - cicd.yaml

Deploying Apps:
  stage: build
  only:
    refs:
      - master
  trigger:
    include:
      - artifact: cicd.yaml
        job: Building CI Script
    strategy: depend
In the manual trigger, instead of considering the last and current commit SHAs, it should rebuild the application.
Any help will be appreciated.
Thank you for your comment (below). I see you are using the include directive (https://docs.gitlab.com/ce/ci/yaml/#include) in one .gitlab-ci.yml to include a GitLab CI YAML file from another project.
I can duplicate this error (No stages / jobs for this pipeline) by invoking "Run Pipeline" on project 1, which is configured to include GitLab CI YAML from project 2, when the project 2 GitLab CI YAML is restricted to the master branch but I'm running the pipeline on another branch.
For example, let's say project 1 is called "stackoverflow-test" and its .gitlab-ci.yml is:
include:
  - project: atsaloli/test
    file: /.gitlab-ci.yml
    ref: mybranch
And project 2 is called "test" (in my own namespace, atsaloli) and its .gitlab-ci.yml is:
my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
If I select "Run Pipeline" in the GitLab UI in project 1 on a branch other than "master", I then get the error message "No stages / jobs for this pipeline".
That's because there is no job defined for my non-master branch, and without any jobs defined, there are no stages defined.
I hope that sheds some light on what's going on with your webhook.
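If you want manual runs to work without dropping the master restriction, only/except also accept the special web ref, which matches pipelines started from the Run Pipeline button. A sketch based on the example above:
my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
      - web # also run when the pipeline is started via "Run pipeline" in the UI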

Do not trigger build when creating new branch from GitLab issue

I use GitLab on gitlab.com, along with the issue tracker.
Each time I have an issue, I create a new branch using the button inside the issue, but this triggers a new build (pipeline) in CI.
I don't want this, because this branch comes from master and doesn't need to be built.
How can I achieve that? Is this a gitlab-ci.yml modification or a repository-related configuration?
You can define in which branches particular jobs of your pipeline will run via the only and except parameters: https://docs.gitlab.com/ee/ci/yaml/#only-and-except-complex
For example, run the Java build in all branches except issue branches:
java-build:
  stage: build
  except:
    - /^issue-.*$/
  script:
    - mvn -U -e install
  image: maven:3.5-jdk-8
Restrict build only to master and release branches:
java-build:
  stage: build
  only:
    - master
    - /^RELEASE-.*$/
  script:
    - mvn -U -e install
  image: maven:3.5-jdk-8
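On newer GitLab versions, the same exclusion can be written with rules instead of only/except; a sketch of the first example in that style:
java-build:
  stage: build
  rules:
    # skip issue branches, run everywhere else
    - if: '$CI_COMMIT_BRANCH =~ /^issue-.*$/'
      when: never
    - when: on_success
  script:
    - mvn -U -e install
  image: maven:3.5-jdk-8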
Based on the commit message, we can avoid building newly created branches in the GitLab CI/CD pipeline:
Build Branch(master):
  stage: build
  only:
    refs:
      - master
    variables:
      - $CI_COMMIT_MESSAGE =~ /^\[master\].*$/
  script:
    - echo "master branch"
    # - sleep 60 | echo "hello master"
  # when: delayed
  # start_in: 3 minutes
  interruptible: true

Build Branch(release):
  stage: build
  only:
    refs:
      - /^build_.*$/
    variables:
      - $CI_COMMIT_MESSAGE =~ /^\[build\].*$/
  script:
    - echo "release branch"
    # - sleep 60 | echo "hello release"
  # when: delayed
  # start_in: 3 minutes
  interruptible: true
These jobs will trigger only when the commit message starts with [master] or [build].
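If you would rather skip the first pipeline of any newly created branch regardless of commit message, GitLab sets CI_COMMIT_BEFORE_SHA to all zeros on a branch's first pipeline, so a workflow rule can filter on that (a sketch; verify the behaviour on your GitLab version):
workflow:
  rules:
    # on the first push of a new branch, CI_COMMIT_BEFORE_SHA is all zeros
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"'
      when: never
    - when: always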
