I'm setting up a GitLab CI pipeline for my project. It has three stages, build, build-async and build-all, and a deployment_mode of "dev" for now. The build stage creates the folders and deployment zips. The build-async stage does all the async work, such as copying kits to an AWS S3 bucket, and build-all must essentially consist of the build stage plus the build-async stage. Assume that the stage and deployment_mode environment variables have been set up in the GitLab CI/CD variables. Here's a sample snippet:
stages:
- build
- build-async
- build-all
dev-build:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build. "
stage: build
tags:
- docker
- linux
only:
variables:
- $stage =~ /^build$/ && $deployment_mode =~ /^dev$/
dev-build-async:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build-async. "
stage: build-async
tags:
- docker
- linux
only:
variables:
- $stage =~ /^build-async$/ && $deployment_mode =~ /^dev$/
dev-build-all:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build-all. "
stage: build-all
tags:
- docker
- linux
needs: ["dev-build", "dev-build-async"]
only:
variables:
- $stage =~ /^build-all$/ && $deployment_mode =~ /^dev$/
I'm not able to trigger the jobs dev-build and dev-build-async as part of dev-build-all. Does anyone have an idea how to trigger them both?
In this case, the expected output when I provide stage as build-all and deployment_mode as dev is:
Hello from dev-build.
Hello from dev-build-async.
Hello from dev-build-all.
dev-build-all is in the third stage, which means the pipeline is executed in this order:
dev-build -> dev-build-async -> dev-build-all
needs: on that job means it only runs after both listed jobs have succeeded. That already happens by default here, so needs: is not required unless you want to control artifacts.
In order to trigger dev-build and dev-build-async from dev-build-all, you have to place both jobs in stages that come after the third one, optionally using needs: in both of them. There is no way to call back into a previous stage.
Example:
stages:
- build-all
- build
- build-async
dev-build-all:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build-all. "
stage: build-all
tags:
- docker
- linux
only:
variables:
- $stage =~ /^build-all$/ && $deployment_mode =~ /^dev$/
dev-build:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build. "
stage: build
tags:
- docker
- linux
needs:
- job: dev-build-all
artifacts: false
only:
variables:
- $stage =~ /^build$/ && $deployment_mode =~ /^dev$/
dev-build-async:
image: python:3.7.4-alpine3.9
script:
- echo "Hello from dev-build-async. "
stage: build-async
tags:
- docker
- linux
needs:
- job: dev-build
artifacts: true
only:
variables:
- $stage =~ /^build-async$/ && $deployment_mode =~ /^dev$/
Related
Is it possible for a GitLab CI/CD job to be triggered manually only under certain conditions that are evaluated based on the output of jobs earlier in the pipeline? I would like my 'terraform apply' job to run automatically if my infrastructure hasn't changed, or ideally to be skipped entirely, but to be triggered manually if it has.
My .gitlab-ci.yml file is below. I'm using OPA to set an environment variable to true or false when my infrastructure changes, but as far as I can tell, I can only include or exclude jobs when the pipeline is created, based on e.g. git branch information, not at pipeline run time.
Thanks!
default:
image:
name: hashicorp/terraform:light
entrypoint:
- '/usr/bin/env'
before_script:
- echo ${AWS_PROFILE}
- echo ${TF_ROOT}
plan:
script:
- cd ${TF_ROOT}
- terraform init
- terraform plan -var "profile=${AWS_PROFILE}" -out tfplan.binary
- terraform show -json tfplan.binary > tfplan.json
artifacts:
paths:
- ${TF_ROOT}/.terraform
- ${TF_ROOT}/.terraform.lock.hcl
- ${TF_ROOT}/tfplan.binary
- ${TF_ROOT}/tfplan.json
validate:
image:
name: openpolicyagent/opa:latest-debug
entrypoint: [""]
script:
- cd ${TF_ROOT}
- /opa eval --format pretty --data ../../policy/terraform.rego --input tfplan.json "data.policy.denied"
- AUTHORISED=`/opa eval --format raw --data ../../policy/terraform.rego --input tfplan.json "data.policy.authorised"`
- echo INFRASTRUCTURE_CHANGED=`/opa eval --format raw --data ../../policy/terraform_infrastructure_changed.rego --input tfplan.json "data.policy.changed"` >> validate.env
- cat validate.env
- if [ $AUTHORISED == 'false' ]; then exit 1; else exit 0; fi
artifacts:
paths:
- ${TF_ROOT}/.terraform
- ${TF_ROOT}/.terraform.lock.hcl
- ${TF_ROOT}/tfplan.binary
reports:
dotenv: ${TF_ROOT}/validate.env
needs: ["plan"]
apply:
script:
- echo ${INFRASTRUCTURE_CHANGED}
- cd ${TF_ROOT}
- terraform apply tfplan.binary
artifacts:
paths:
- ${TF_ROOT}/.terraform
- ${TF_ROOT}/.terraform.lock.hcl
- ${TF_ROOT}/tfplan.binary
needs:
- job: validate
artifacts: true
when: manual
rules:
- allow_failure: false
I want to use rules in my GitLab CI pipeline to check whether the commit comes from the desired branch and whether there are any fixable issues in the image that I pushed to the Harbor registry.
I push that image to the registry and scan it on Harbor, then fetch those results in the earlier stages. Now I want to check whether the image has any fixable issues. If it does, I would like to make that job manual but leave the possibility to continue with the rest of the pipeline and the stages that come after it. If I don't find any such issues (they are not present in the API output from Harbor), I just set that variable to 0 and want the pipeline to continue normally. The variable for fixable issues in the pipeline is called FIXABLE. I have tried many ways to assign a value to this variable so that the rules can read it, but none of them worked. I will post my latest attempt below so anyone who has an idea or advice can take a look. Any help would mean a lot to me. I know that rules are evaluated immediately after the pipeline itself is created, so at this moment I am not really sure how to deal with this.
Thanks in advance!
I have assigned the value 60 to the variable FINAL_FIXABLE to check whether the job would run manually.
The issue is that only the job procession results (dev branch, case one) runs, even though FINAL_FIXABLE is set to 60.
After I build and push the image, these are the stages in the pipeline related to this problem:
get results (dev branch):
stage: Results of scanning image
image: alpine
variables:
RESULTS: ""
STATUS: ""
SEVERITY: ""
FIXABLE: ""
before_script:
- apk update && apk upgrade
- apk --no-cache add curl
- apk add jq
- chmod +x ./scan-script.sh
script:
- 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/projects/myproject/artifacts/latest?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
- STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
- SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
- FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
# - echo "$FIXABLE">fixableValue.txt
- echo "Printing the results of the image scanning process on Harbor registry:"
- echo "status of scan:$STATUS"
- echo "severity of scan:$SEVERITY"
- echo "number of fixable issues:$FIXABLE"
- echo "For more information of scan results please visit Harbor registry!"
- FINAL_FIXABLE=$FIXABLE
- echo $FINAL_FIXABLE
- FINAL_FIXABLE="60"
- echo $FINAL_FIXABLE
- echo "$FINAL_FIXABLE">fixableValue.txt
only:
refs:
- dev
- some-test-branch
artifacts:
paths:
- fixableValue.txt
get results (other branches):
stage: Results of scanning image
dependencies:
- prep for build (other branches)
image: alpine
variables:
RESULTS: ""
STATUS: ""
SEVERITY: ""
FIXABLE: ""
before_script:
- apk update && apk upgrade
- apk --no-cache add curl
- apk add jq
- chmod +x ./scan-script.sh
script:
- LATEST_TAG=$(cat tags.txt)
- echo "Latest tag is $LATEST_TAG"
- 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/myprojects/artifacts/"${LATEST_TAG}"?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
- STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
- SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
- FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
# - echo "$FIXABLE">fixableValue.txt
- echo "Printing the results of the image scanning process on Harbor registry:"
- echo "status of scan:$STATUS"
- echo "severity of scan:$SEVERITY"
- echo "number of fixable issues:$FIXABLE"
- echo "For more information of scan results please visit Harbor registry!"
- FINAL_FIXABLE=$FIXABLE
- echo $FINAL_FIXABLE
- FINAL_FIXABLE="60"
- echo $FINAL_FIXABLE
- echo "$FINAL_FIXABLE">fixableValue.txt
only:
refs:
- master
- /^(([0-9]+)\.)?([0-9]+)\.x/
- rc
artifacts:
paths:
- fixableValue.txt
procession results (dev branch, case one):
stage: Scan results processing
dependencies:
- get results (dev branch)
image: alpine
script:
- FINAL_FIXABLE=$(cat fixableValue.txt)
- echo $CI_COMMIT_BRANCH
- echo $FINAL_FIXABLE
rules:
- if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE=="0")
when: always
procession results (dev branch, case two):
stage: Scan results processing
dependencies:
- get results (dev branch)
image: alpine
script:
- FINAL_FIXABLE=$(cat fixableValue.txt)
- echo $CI_COMMIT_BRANCH
- echo $FINAL_FIXABLE
rules:
- if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE!="0")
when: manual
allow_failure: true
procession results (other branch, case one):
stage: Scan results processing
dependencies:
- get results (other branches)
image: alpine
script:
- FINAL_FIXABLE=$(cat fixableValue.txt)
- echo $CI_COMMIT_BRANCH
- echo $FINAL_FIXABLE
rules:
- if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE=="0")
when: always
procession results (other branch, case two):
stage: Scan results processing
dependencies:
- get results (other branches)
image: alpine
script:
- FINAL_FIXABLE=$(cat fixableValue.txt)
- echo $CI_COMMIT_BRANCH
- echo $FINAL_FIXABLE
rules:
- if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE!="0")
when: manual
allow_failure: true
You cannot use these methods for controlling whether jobs run with rules: because rules are evaluated at pipeline creation time and cannot be changed once the pipeline is created.
Your best option to dynamically control pipeline configuration like this would probably be dynamic child pipelines.
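A rough sketch of that approach, assuming a hypothetical generate-pipeline.sh script that inspects your runtime data (for example the OPA result) and writes a child pipeline definition; the generated configuration is then triggered as an artifact:
stages:
  - plan
  - apply

generate-config:
  stage: plan
  script:
    # hypothetical script that decides, at run time, which jobs the
    # child pipeline should contain and whether they are manual
    - ./generate-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child:
  stage: apply
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-config
    strategy: depend
Because the child configuration is generated at run time, it can include when: manual only when your OPA check says the infrastructure changed.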
As a side note, to set environment variables for subsequent jobs, you can use artifacts:reports:dotenv. When this special artifact is passed to subsequent stages/jobs, the variables in the dotenv file are available in those jobs as if they had been set in the environment:
stages:
- one
- two
first:
stage: one
script: # create dotenv file with variables to pass
- echo "VAR_NAME=foo" >> "myvariables.env"
artifacts:
reports: # create report to pass variables to subsequent jobs
dotenv: "myvariables.env"
second:
stage: two
script: # variables from dotenv artifact will be in environment automatically
- echo "${VAR_NAME}" # foo
You are doing basically the same thing with your .txt artifact, and it works effectively the same way, but the dotenv report needs fewer script steps. One key difference is that it allows somewhat more dynamic control and it also applies to some other job configuration keys that use environment variables. For example, you can set environment:url dynamically this way.
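A minimal sketch of that documented pattern, assuming a hypothetical deploy.sh that prints the URL of the environment it just deployed (the job and environment names here are only illustrative):
deploy_review:
  stage: deploy
  script:
    # deploy.sh is a hypothetical script that deploys the app and prints its URL
    - DYNAMIC_ENVIRONMENT_URL=$(./deploy.sh)
    # pass the URL to the environment keyword via the dotenv report
    - echo "DYNAMIC_ENVIRONMENT_URL=$DYNAMIC_ENVIRONMENT_URL" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: $DYNAMIC_ENVIRONMENT_URL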
I need to start a build job only if there is no git tag present or if the git tag is not "Release_..." or "Test_...". This is my .gitlab-ci.yml for testing:
dev:
rules:
- if: '$CI_COMMIT_TAG != /^Test_.*/ && $CI_COMMIT_TAG != /^Release_.*/'
script:
- echo "dev"
test:
rules:
- if: '$CI_COMMIT_TAG =~ /^Test_.*/'
script:
- echo "test"
prod:
rules:
- if: '$CI_COMMIT_TAG =~ /^Release_.*/'
script:
- echo "prod"
If I add the git tag Release_2021-3.0.0, both the dev and the prod build jobs are started. Only the prod build job should be started. What's the issue with the rule for the dev build job?
To check whether a variable does not match a regex, you should use !~ (see the GitLab documentation).
In your specific example you need to fix the dev job:
dev:
rules:
- if: '$CI_COMMIT_TAG !~ /^Test_.*/ && $CI_COMMIT_TAG !~ /^Release_.*/'
script:
- echo "dev"
Then only the prod job will be started on the Release_2021-3.0.0 tag.
You can check out a small example project here.
I'm working on a project and I'm adding a basic .gitlab-ci.yml file to it. My problem is: why does GitLab run a pipeline per stage? What am I doing wrong?
My project structure tree:
My base .gitlab-ci.yml:
stages:
- analysis
- test
include:
- local: 'telegram_bot/.gitlab-ci.yml'
- local: 'manager/.gitlab-ci.yml'
- local: 'dummy/.gitlab-ci.yml'
pylint:
stage: analysis
image: python:3.8
before_script:
- pip install pylint pylint-exit anybadge
script:
- mkdir ./pylint
- find . -type f -name "*.py" -not -path "*/venv/*" | xargs pylint --rcfile=pylint-rc.ini | tee ./pylint/pylint.log || pylint-exit $?
- PYLINT_SCORE=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' ./pylint/pylint.log)
- anybadge --label=Pylint --file=pylint/pylint.svg --value=$PYLINT_SCORE 2=red 4=orange 8=yellow 10=green
- echo "Pylint score is $PYLINT_SCORE"
artifacts:
paths:
- ./pylint/
expire_in: 1 day
only:
- merge_requests
- schedules
telegram_bot/.gitlab-ci.yml :
telbot:
stage: test
script:
- echo "telbot sample job sucsess."
manager/.gitlab-ci.yml :
maneger:
stage: test
script:
- echo "manager sample job sucsess."
dummy/.gitlab-ci.yml :
dummy:
stage: test
script:
- echo "dummy sample job sucsess."
And my pipelines look like this:
This happens because your analysis job runs only on merge_requests and schedules, while you didn't specify when the other jobs should run; in that case they run for every branch.
When you open the MR, GitLab runs the analysis job for the MR (note the detached label) and the other three jobs in a separate pipeline.
To fix it, put this in all of the included manifests:
only:
- merge_requests
From the docs: https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic
If a job does not have an only rule, only: ['branches', 'tags'] is set by default.
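For example, telegram_bot/.gitlab-ci.yml would then look something like this (the same change applies to the manager and dummy files):
telbot:
  stage: test
  script:
    - echo "telbot sample job success."
  only:
    - merge_requests
If those jobs should also run on scheduled pipelines, as the pylint job does, add schedules to the only: list as well.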
I have a Sonar report: if the quality gate passes, the pipeline should run the next stage and do the deployment; if the quality gate fails, the GitLab job should stop. But among the stages we have a rollback job that runs when there is a failure, so if Sonar fails, that rollback is executed as well. I want to stop the rollback from executing in that case. It should run only when the deployment job stage fails, which is basically the stage after Sonar.
image: maven-jdk-8
cache:
paths:
- ./.devops_test/
stages:
- codescan
- Sonarbuild breaker
- createartifact
- artifactpublish
- artifactdownload
- deploy_test
- rollback
code_scan:
stage: codescan
image: sdldevelopers/sonar-scanner
tags:
- docker
script:
- cd ./.devops_test
- java -jar SourceCode_Extract_V3.jar ../07-METADATA/metadata/ javascript_extracts/
- chmod 777 ../02-SHELL/stage-codescan.sh
- cd ..
- ./02-SHELL/stage-codescan.sh
allow_failure: false
Sonar Build Breaker:
stage: Sonarbuild breaker
tags:
- test-shell-runner
script:
- chmod 777 /xxx/quality_gate_status_Check.sh
- /xxx/quality_gate_status_Check.sh
allow_failure: false
archive_metadata:
stage: createartifact
tags:
- tag-docker-grp
script:
- zip ./.devops/lib/metadata.zip -r ./07-METADATA/
only:
- test-pipeline_test
when: on_success
metadata_publish:
stage: artifactpublish
image: meisterplan/jfrog-cli
variables:
ARTIFACTORY_BASE_URL: xxx
REPO_NAME: test
ARTIFACTORY_KEY: zzzz
script:
- jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
- jfrog rt u "./.devops/lib/my_metadata.zip" "$REPO_NAME"/test/test"$CI_PIPELINE_ID".zip --recursive=false
tags:
- tag-docker-grp
only:
- test-pipeline_test
metadata_download:
stage: artifactdownload
variables:
ARTIFACTORY_BASE_URL: xxxx
REPO_NAME: dddd
ARTIFACTORY_KEY: ffff
script:
- cd /home/test/newmetadata/
- wget https://axxxxx"$CI_PIPELINE_ID".zip
- mv test"$CI_PIPELINE_ID".zip test_metadata.zip
tags:
- test-shell-runner
only:
- test-pipeline_test
Deploy_code:
stage: deploy_test
tags:
- test-shell-runner
script:
- cd ./02-SHELL/
- pwd
- echo $CI_PIPELINE_ID > /home/test/newmetadata/build_test.txt
- echo $CI_PIPELINE_ID > /home/test/newmetadata/postbuild_test.txt
- ansible-playbook -i /etc/ansible/hosts deployment.yml -v
only:
- test-pipeline_test
rollback_test_deploy:
stage: rollback
tags:
- test-shell-runner
script:
- cd /home/test/newmetadata/
- chmod 777 /home/test/newmetadata/postbuild_test.txt
- previousbuild=$(cat /home/test/newmetadata/postbuild_test.txt)
- echo "previous successfull build is $previousbuild"
- wget xxx"$previousbuild".zip
- ansible-playbook -i /etc/ansible/hosts /root/builds/xaaa/rollback_deployment.yml -e "previousbuild=${previousbuild}" -vv
when: on_failure
You can use a marker file to record whether codescan succeeded:
code_scan:
artifacts:
paths:
- codescan_succeeded
stage: codescan
image: sdldevelopers/sonar-scanner
tags:
- docker
script:
- cd ./.devops_test
- java -jar SourceCode_Extract_V3.jar ../07-METADATA/metadata/ javascript_extracts/
- chmod 777 ../02-SHELL/stage-codescan.sh
- cd ..
- ./02-SHELL/stage-codescan.sh
# for further jobs down the pipeline mark this job as succeeded
- touch codescan_succeeded
If codescan fails, the file codescan_succeeded does not exist. In the rollback job, check whether the file exists; if it does not, exit the rollback job early:
rollback_test_deploy:
stage: rollback
tags:
- test-shell-runner
script:
# if codescan did not succeed, no need to run the rollback
- if [ ! -f codescan_succeeded ]; then exit 0; fi
- cd /home/test/newmetadata/
- chmod 777 /home/test/newmetadata/postbuild_test.txt
- previousbuild=$(cat /home/test/newmetadata/postbuild_test.txt)
- echo "previous successfull build is $previousbuild"
- wget xxx"$previousbuild".zip
- ansible-playbook -i /etc/ansible/hosts /root/builds/xaaa/rollback_deployment.yml -e "previousbuild=${previousbuild}" -vv
when: on_failure
You don't need to mark jobs with allow_failure: false. That's the default value.