How to trigger a GitLab CI pipeline manually when it is normally triggered by a webhook with commit IDs?

I have a GitLab CI pipeline that is triggered by a Bitbucket webhook with the current and last commit IDs. I also want to re-run the pipeline manually whenever the build created by the webhook-triggered GitLab CI file is not working as expected.
I tried the Run Pipeline option, but it shows the error:
The form contains the following error:
No stages/jobs for this pipeline.
Here is the GitLab CI file. The include refers to another project where the standard YAML file for the pipeline is kept:
include:
  - project: Path/to/project
    ref: bb-deployment
    file: /bitbucket-deployment.yaml

variables:
  TILLER_NAMESPACE: <namespace>
  NAMESPACE: testenv
  REPO_REF: testenvbranch
  LastCommitSHA: <commit sha from webhook>
  CurrentCommitSHA: <current commit sha from webhook>
Here is the detailed gitlab-ci file in the other project, which defines the stages:
stages:
  - pipeline
  - build

variables:
  ORG: test
  APP_NAME: $CI_PROJECT_NAME

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIIVATE_KEY2" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Building CI Script:
  stage: pipeline
  image: python:3.6
  only:
    refs:
      - master
  script:
    - |
      curl https://github.com/org/scripts/branch/install.sh | bash -s latest
      source /usr/local/bin/pipeline-variables.sh
      git clone git@bitbucket.org:$ORG/$APP_NAME.git
      cd $APP_NAME
      git checkout $lastCommit
      cp -r env old
      git checkout $bitbucketCommit
      $CMD_DIFF old env
      $CMD_BUILD
      $CMD_INSTALL updatedReposList.yaml deletedReposList.yaml /tmp/test $NAMESPACE $REPO_REF $ORG $APP_NAME $lastCommit $bitbucketCommit
      cat cicd.yaml
      mv cicd.yaml ..
  artifacts:
    paths:
      - cicd.yaml

Deploying Apps:
  stage: build
  only:
    refs:
      - master
  trigger:
    include:
      - artifact: cicd.yaml
        job: Building CI Script
    strategy: depend
In the manual trigger, instead of considering the last and current commit SHA, it should rebuild the application.
Any help will be appreciated.

Thank you for your comment (below). I see you are using the include directive (https://docs.gitlab.com/ce/ci/yaml/#include) in one .gitlab-ci.yml to include a GitLab CI YAML file from another project.
I can reproduce this error (No stages / jobs for this pipeline) by invoking "Run Pipeline" on project 1, which is configured to include GitLab CI YAML from project 2, when project 2's GitLab CI YAML is restricted to the master branch but I'm running the pipeline on another branch.
For example, let's say project 1 is called "stackoverflow-test" and its .gitlab-ci.yml is:
include:
  - project: atsaloli/test
    file: /.gitlab-ci.yml
    ref: mybranch
And project 2 is called "test" (in my own namespace, atsaloli) and its .gitlab-ci.yml is:
my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
If I select "Run Pipeline" in the GitLab UI in project 1 on a branch other than "master", I then get the error message "No stages / jobs for this pipeline".
That's because no job is defined for my non-master branch, and without any jobs there are no stages defined either.
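If you want a manual "Run Pipeline" on another branch to still produce jobs, one option (a sketch, not part of the original setup) is to replace the only: restriction in the included file with rules that also allow web-triggered runs, for example:

my_job:
  script: echo hello world
  image: alpine
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"    # manual "Run Pipeline" from the UI
    - if: $CI_COMMIT_BRANCH == "master"   # keep the original master restriction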
I hope that sheds some light on what's going on with your webhook.

Related

GitLab CI Pipeline: Pipeline cannot be run

I can't figure out why the GitLab CI pipelines for my repo won't run. I have a .gitlab-ci.yml file and the feature enabled, but the pipeline won't run. Also, if I try to trigger the pipeline manually, I get the following error back.
Pipeline cannot be run.
Pipeline will not run for the selected trigger. The rules configuration prevented any jobs from being added to the pipeline.
The CI feature is enabled.
Here is my .gitlab-ci.yml file.
stages:
  - build
  - deploy

npm-run-build:
  stage: build
  image: node:19
  only:
    - main
  cache:
    key: ${CI_COMMIT_REF_SLUG}-build
    paths:
      - dist/
  script:
    - cp .env.example .env
    - npm ci
    - npm run build-only

deploy-dist:
  stage: deploy
  image: fedora:latest
  only:
    - main
  environment:
    name: production
    url: https://example.com
  needs:
    - npm-run-build
  cache:
    key: ${CI_COMMIT_REF_SLUG}-build
    paths:
      - dist/
  before_script:
    - dnf install -y openssh-clients
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan -t rsa example.com > ~/.ssh/known_hosts
  script:
    # create remote project dir if not available
    - ssh thomas@example.com "mkdir -p /home/thomas/example.com"
    # upload project files
    - scp -prq . thomas@example.com:/home/thomas/example.com
    # restart the container
    - ssh thomas@example.com "cd /home/thomas/example.com && docker-compose down && docker-compose up -d"
Thank you! 😁
As D Malan pointed out in the comments, I have restricted the runs with only to the main branch. But the branch name is actually master 🤦
So I just changed the rule from main to master and now it is running 👌
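For reference, the corrected restriction in both jobs is simply:

only:
  - master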

Publish release upon successful pipeline

I am using a private gitlab-runner to build an ISO and then upload the ISO and its log to my S3 bucket. This section of the pipeline works without a hitch, but I recently decided to create a "release" upon a successful pipeline. For this reason, I am using the following .gitlab-ci.yml:
stages:
  - build
  - release

build_amd64:
  stage: build
  # Do not run the pipeline if the following files/directories are modified:
  # except:
  #   changes:
  #     - Pictures/
  #     - .gitignore
  #     - FUNDING.yml
  #     - LICENSE
  #     - README.md
  tags:
    - digitalocean
  timeout: 4 hours
  before_script:
    # Make sure the dependencies for the build are up-to-date:
    - /usr/bin/apt update
    - /usr/bin/apt install --only-upgrade -y curl git live-build cdebootstrap
    # Save Job ID
    - echo 'Saving $CI_JOB_ID to .env'
    - echo BUILD_JOB_ID=$CI_JOB_ID >> .env
  script:
    # Build the ISO:
    - ./build.sh --arch amd64 --variant i3_gaps --verbose
  after_script:
    - |
      if [ $CI_JOB_STATUS == 'success' ]; then
        # Remove everything except the "images" folder:
        shopt -s extglob
        /usr/bin/rm -rf !(images/)
        # Upload log:
        # /usr/bin/s3cmd put ./images/log s3://$S3_BUCKET/download/log
        # Set log access to public:
        # /usr/bin/s3cmd setacl s3://$S3_BUCKET/download/log --acl-public
        # Upload iso:
        # /usr/bin/s3cmd put ./images/iso s3://$S3_BUCKET/download/iso
        # Set iso access to public:
        # /usr/bin/s3cmd setacl s3://$S3_BUCKET/download/iso --acl-public
      else
        # If the pipeline fails, skip the upload process:
        echo 'Skipping upload process due to job failure'
        sleep 5; /usr/sbin/reboot
        /usr/bin/screen -dm /bin/sh -c '/usr/bin/sleep 5; /usr/sbin/reboot;'
      fi
  artifacts:
    reports:
      # Ensure that we have access to .env in the next stage
      dotenv: .env

publish_release:
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  stage: release
  needs:
    - job: build_amd64
      artifacts: true
  release:
    name: 'ISO | $CI_COMMIT_SHORT_SHA | $BUILD_JOB_ID'
    description: "Created and released via Gitlab CI/CD."
    tag_name: "$CI_COMMIT_SHORT_SHA"
    ref: '$CI_COMMIT_SHA'
    assets:
      links:
        - name: "ISO"
          url: "https://foo.bar"
          link_type: "image"
        - name: "Build Log"
          url: "https://foo.bar"
          link_type: "other"
However, I have realized that when the release job runs, it creates the release without any issues, but then a new pipeline is started for the new tag (in this case $CI_COMMIT_SHORT_SHA), in addition to the pipeline that originally ran for the main branch.
I checked the documentation but I cannot find anything regarding this matter.
Is there a way to not run a pipeline when a release is published?
What is happening here is that because the specified tag doesn't exist, it will be created with the release. This causes a tagged pipeline to run (just as if you created the tag and pushed it).
If you just want to ignore tag pipelines, you can use a workflow rule to exclude them:
workflow:
  rules:
    - if: $CI_COMMIT_TAG
      when: never # ignore pipelines for tags
    - when: always # run the pipeline otherwise

Merge Gitlab Runner into existing branch

I'm currently automating some parts of our code. The idea is simple: with every commit in our GitLab, the script takes an Excel file from a given directory, splits its worksheets into individual Excel files, and saves them into another directory. The problem right now is that I want to merge those files into our branch from the GitLab Runner. I tried editing the gitlab-ci file, but I can't get it to work. I get this error:
The request URL returned error: 403
I tried adding a personal token and then using it for pushing like this:
variables:
  GIT_STRATEGY: clone

build-job:
  stage: build
  tags:
    - data_dict
  script:
    - echo "Hello!"
    - ls
    - python3 -V
    - pip3 list

test-job1:
  stage: test
  tags:
    - data_dict
  script:
    - python3 Experiments/ConfAutomation.py

test-job2:
  stage: test
  tags:
    - data_dict
  script:
    - git show-ref
    - git remote -v
    - echo "$RUNNER_ACCESS_TOKEN"
    - echo "Print runner branch name"
    - git config user.email "user@mail.com"
    - git config user.name "name"
    # - git remote set-url --push origin https://gitlab-ci:"$RUNNER_ACCESS_TOKEN"@gitlab/dir.git
    - git add .
    - git commit --allow-empty -m "Files from runner to branch"
    - git push origin https://gitlab-ci:"$RUNNER_ACCESS_TOKEN"@gitlab/dir.git <branch-name>

deploy-prod:
  stage: deploy
  tags:
    - data_dict
  script:
    - echo "This job deploys something."
I tried some stuff but can't seem to get it to work. Maybe you had a similar problem or have some ideas? Any help would be greatly appreciated.
Try using ${RUNNER_ACCESS_TOKEN} without the double quotes.
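For example, the push step would then look something like this (a sketch of the suggestion above; the remote URL is passed directly instead of origin, and the host/path are the question's placeholders):

- git push https://gitlab-ci:${RUNNER_ACCESS_TOKEN}@gitlab/dir.git <branch-name>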

Gitlab CI SAST access to gl-sast-report.json artifact in subsequent stage

I want to use the gl-sast-report.json file created during the SAST process in a subsequent stage of my CI, but it is not found.
ci.yml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - .post

sast:
  rules:
    - if: $CI_COMMIT_TAG

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - ls
    - echo "in post stage"
    - cat gl-sast-report.json
output:
Running with gitlab-runner 13.2.1 (efa30e33)
on blah blah blah
Preparing the "docker" executor
00:01
.
.
.
Preparing environment
00:01
Running on runner-zqk9bcef-project-4296-concurrent-0 via ff93ba7b6ee2...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in blah blah
Checking out 9c2edf67 as 39-test-dso...
Removing gl-sast-report.json
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ ls
<stuff in the repo>
$ echo "in .post stage"
in post stage
$ cat gl-sast-report.json
cat: can't open 'gl-sast-report.json': No such file or directory
ERROR: Job failed: exit code 1
You can see the line Removing gl-sast-report.json, which I assume is the issue.
I don't see that anywhere in the SAST.gitlab-ci.yml at https://gitlab.com/gitlab-org/gitlab/-/blob/v11.11.0-rc2-ee/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml#L33-45
Any ideas on how to use this artifact in the next stage of my CI pipeline?
UPDATE:
So I tried out k33g_org's suggestion below, but to no avail. It seems that this is due to limitations in the sast template specifically. I did the following test.
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - upload

something:
  stage: test
  script:
    - echo "in something"
    - echo "this is something" > something.txt
  artifacts:
    paths: [something.txt]

sast:
  before_script:
    - echo "hello from before sast"
    - echo "this is in the file" > test.txt
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths: [gl-sast-report.json, test.txt]

send-reports:
  stage: upload
  dependencies:
    - sast
    - something
  before_script:
    - echo "This is the send-reports before_script"
  script:
    - echo "in send-reports job"
    - ls
  artifacts:
    reports:
      sast: gl-sast-report.json
Three changes:
Updated code with k33g_org's suggestion
Created another artifact in the sast job (to see if it would pass through to send-reports job)
Created a new job (something) where I created a new something.txt artifact (to see if it would pass through to send-reports job)
Output:
Preparing environment
00:01
Running on runner-zqx7qoq-project-4296-concurrent-0 via e3fe672984b4...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /<repo>
Checking out 26501c44 as <branch_name>...
Removing something.txt
Skipping Git submodules setup
Downloading artifacts
00:00
Downloading artifacts for something (64950)...
Downloading artifacts from coordinator... ok id=64950
responseStatus=200 OK token=zoJwysdq
Executing "step_script" stage of the job script
00:01
$ echo "This is the send-reports before_script"
This is the send-reports before_script
$ echo "in send-reports job"
in send-reports job
$ ls
...<other stuff in repo>
something.txt
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: gl-sast-report.json: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Notes:
something.txt made it to this job
all artifacts from the sast job do not make it to subsequent jobs
I can only conclude that there is something internal to the sast template that is not allowing artifacts to propagate to subsequent jobs.
In the first job (sast), add this:
artifacts:
  paths: [gl-sast-report.json]
  reports:
    sast: gl-sast-report.json
and in the next job (send-reports), add this:
artifacts:
  reports:
    sast: gl-sast-report.json
Then you should be able to access the report in the next job (send-reports).
Instead of referencing the gl-sast-report.json artifact as a SAST report, reference it as a regular artifact.
So what you should do is declare the artifact this way:
artifacts:
  paths:
    - 'gl-sast-report.json'
instead of:
artifacts:
  reports:
    sast: gl-sast-report.json
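Putting that together, a minimal override of the sast job plus a consuming job might look like this (a sketch assuming the template's job name is sast, as in the question):

sast:
  artifacts:
    paths:
      - gl-sast-report.json

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - cat gl-sast-report.json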
I spent a full day banging my head against this, trying to access the gl-sast-report.json file generated by the built-in IaC scanner. Here's what ultimately worked for me:
First and foremost, DO NOT use this code suggested by GitLab's documentation:
include:
  - template: Security/SAST-IaC.latest.gitlab-ci.yml
The above code works fine if all you want to do is scan for IaC vulnerabilities and download the report from the GitLab UI later. But who wants to do that?! I want to access the report in my next job and fail the pipeline if there are medium+ vulnerabilities in the report!
If that's what you want to do, you'll need to add all of the code from the official GitLab IaC scanner template to your pipeline, and then make some modifications. You can find the latest template code here, or use my example below.
Modified template:
# Read more about this feature here: https://docs.gitlab.com/ee/user/application_security/iac_scanning/
#
# Configure SAST with CI/CD variables (https://docs.gitlab.com/ee/ci/variables/index.html).
# List of available variables: https://docs.gitlab.com/ee/user/application_security/iac_scanning/index.html

variables:
  # Setting this variable will affect all Security templates
  # (SAST, Dependency Scanning, ...)
  TEMPLATE_REGISTRY_HOST: 'registry.gitlab.com'
  SECURE_ANALYZERS_PREFIX: "$TEMPLATE_REGISTRY_HOST/security-products"
  SAST_IMAGE_SUFFIX: ""
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"

iac-sast:
  stage: test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    #reports:
    #  sast: gl-sast-report.json
    when: always
  rules:
    - when: never
  # `rules` must be overridden explicitly by each child job
  # see https://gitlab.com/gitlab-org/gitlab/-/issues/218444
  variables:
    SEARCH_MAX_DEPTH: 4
  allow_failure: true
  script:
    - /analyzer run

kics-iac-sast:
  extends: iac-sast
  image:
    name: "$SAST_ANALYZER_IMAGE"
  variables:
    SAST_ANALYZER_IMAGE_TAG: 3
    SAST_ANALYZER_IMAGE: "$SECURE_ANALYZERS_PREFIX/kics:$SAST_ANALYZER_IMAGE_TAG$SAST_IMAGE_SUFFIX"
  rules:
    - if: $SAST_DISABLED
      when: never
    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/
      when: never
    - if: $CI_COMMIT_BRANCH

Enforce Compliance:
  stage: Compliance
  before_script:
    - apk add jq
  script:
    - jq -r '.vulnerabilities[] | select(.severity == "Critical") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json > results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "High") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "Medium") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - chmod u+x check-sast-results.sh
    - ./check-sast-results.sh
You'll also need to make sure to add two stages to your pipeline (if you don't have them already):
stages:
  # add these to whatever other stages you already have
  - test
  - Compliance
Note: it's extremely important that the job trying to access gl-sast-report.json ("Compliance" in this case) is not in the same stage as the SAST scans themselves ("test" in this case). If they are, your job will try to access the report before it exists and will fail.
I'll include my shell script referenced in the pipeline in case you want to use that too:
#!/bin/sh
if [ -s results.txt ]; then
  echo ""
  echo ""
  cat results.txt
  echo ""
  echo "ERROR: SAST SCAN FOUND VULNERABILITIES - FIX ALL VULNERABILITIES TO CONTINUE"
  echo ""
  exit 1
fi
This is a basic script that checks to see if the "results.txt" file has any contents. If it does, it exits with code 1 to break the pipeline and print the vulnerabilities. If there are no contents in the file, the script exits with code 0 and the pipeline continues (allowing you to deploy your infra). Save the file above as "check-sast-results.sh" in the root directory of your GitLab repository (the same level where ".gitlab-ci.yml" resides).
Hope this helps someone out there.
I've found this issue too; it also impacts some of the other scanners. I raised an issue with GitLab to fix it:
https://gitlab.com/gitlab-org/gitlab/-/issues/345696

Do not trigger build when creating new branch from Gitlab issue

I use Gitlab on gitlab.com and the issue tracker.
Each time I have an issue, I create a new branch using the button inside the issue, but this triggers a new build (pipeline) in CI.
I don't want this because this branch is coming from master and doesn't need to be built.
How can I achieve that? Is this a gitlab-ci.yml modification or a repository-related configuration?
You can define in which branches particular steps of your build will run via the only and except parameters: https://docs.gitlab.com/ee/ci/yaml/#only-and-except-complex
For example, run java build in all branches except issue branches:
java-build:
  stage: build
  except:
    - /^issue-.*$/
  script:
    - mvn -U -e install
  image: maven:3.5-jdk-8
Restrict build only to master and release branches:
java-build:
  stage: build
  only:
    - master
    - /^RELEASE-.*$/
  script:
    - mvn -U -e install
  image: maven:3.5-jdk-8
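On newer GitLab versions, the same restriction can also be expressed with rules: instead of only/except (a sketch equivalent to the first example above):

java-build:
  stage: build
  image: maven:3.5-jdk-8
  rules:
    - if: $CI_COMMIT_BRANCH =~ /^issue-.*$/
      when: never        # skip issue branches
    - when: on_success   # run for everything else
  script:
    - mvn -U -e install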
Based on the commit message, we can avoid building for the newly created branch in the GitLab CI/CD pipeline:
Build Branch(master):
  stage: build
  only:
    refs:
      - master
    variables:
      - $CI_COMMIT_MESSAGE =~ /^\[master\].*$/
  script:
    - echo "master branch"
    # - sleep 60 | echo "hello master"
  # when: delayed
  # start_in: 3 minutes
  interruptible: true

Build Branch(release):
  stage: build
  only:
    refs:
      - /^build_.*$/
    variables:
      - $CI_COMMIT_MESSAGE =~ /^\[build\].*$/
  script:
    - echo "release branch"
    # - sleep 60 | echo "hello release"
  # when: delayed
  # start_in: 3 minutes
  interruptible: true
The jobs will only trigger when the commit message starts with [master] or [build].
