GitLab CI: skip job if changes are only in specific files inside a relevant folder

I have this gitlab-ci.yml:
build-docker:
  stage: build-docker
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - app/Dockerfile
        - app/requirements.txt
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - echo 'Git Pulling, building and restarting'

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - app/**/*
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - echo 'Git Pulling and restarting'
My problem is that I don't need deploy to run if the only changed files are app/Dockerfile and/or app/requirements.txt (because the build job already ran and does the same as the deploy stage, and more), but I do need it to run if changes happen to any other file inside the app folder.
I already tried this in the deploy stage:
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  changes:
    - "app/!(Dockerfile)"
    - "app/!(requirements.txt)"
    - app/**/*
  when: always
- when: manual
  allow_failure: true
But this doesn't work as expected.
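One reason this fails is that rules:changes takes glob patterns but does not support extglob-style negation like app/!(Dockerfile), so a possible workaround is to enumerate only the paths deploy should react to. A minimal sketch, assuming the remaining files live under known subdirectories (app/src and app/config here are hypothetical; substitute your real layout):

```yaml
deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        # Hypothetical subpaths: list everything under app/ EXCEPT
        # Dockerfile and requirements.txt, which build-docker handles.
        - app/src/**/*
        - app/config/**/*
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - echo 'Git Pulling and restarting'
```

Note that a commit touching both app/Dockerfile and other app files would then run both jobs, which matches the requirement.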


GitLab CI: execute SAST jobs only when merging a branch to master

Hello, I'm trying to figure out how to run SAST jobs only when merging a branch into master, because they take about 5 minutes and currently run on every push to any branch.
This means that every time someone pushes to their MR branch, the security stage runs with all SAST jobs.
What I want to achieve is that the SAST jobs execute only when the branch is about to be merged into master.
gitlab-ci.yml:
include:
  - template: Jobs/SAST.gitlab-ci.yml

stages:
  - security
  - tests

my_tests:
  stage: tests
  script:
    - echo Running tests ...

sast:
  stage: security
What I tried so far is using:
sast:
  stage: security
  only:
    - master
But it fails because the included template Jobs/SAST.gitlab-ci.yml already uses rules, and rules can't be combined with only/except:
jobs:sast config key may not be used with rules: only
In the source code, Jobs/SAST.gitlab-ci.yml does not use except but rules, which are likewise incompatible with only. You could instead switch to the rules syntax:
sast:
  stage: security
  rules:
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
      when: always
That should do the trick.
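If you also want the SAST jobs to run on merge request pipelines targeting the default branch (so findings surface before the merge, not only after), a possible variant, assuming MR pipelines are enabled for the project:

```yaml
sast:
  stage: security
  rules:
    # Run in MR pipelines that target the default branch
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH'
    # ...and on pushes to the default branch itself
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```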
You can use this configuration:
include:
  - template: Jobs/SAST.gitlab-ci.yml

phpcs-security-audit-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.php'

semgrep-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.go'
        - '**/*.html'
        - '**/*.js'
        - '**/*.jsx'
        - '**/*.ts'
        - '**/*.tsx'

gosec-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.go'

nodejs-scan-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/package.json'

bandit-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.py'

flawfinder-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.c'
        - '**/*.cpp'

eslint-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.html'
        - '**/*.js'
        - '**/*.jsx'
        - '**/*.ts'
        - '**/*.tsx'

spotbugs-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
      exists:
        - '**/*.groovy'
        - '**/*.java'
        - '**/*.scala'
        - '**/*.kt'

Trigger and wait in a GitLab pipeline

I have 3 projects in GitLab:
Frontend
Backend
Deployment
Each project has a separate pipeline definition with its own CI pipeline; using the multi-project pipeline concept, each one invokes the deployment project's pipeline to deploy its module.
Frontend
image: frontend:runner1.1

# Cache modules in between jobs
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  JAVA_OPTS: "-Dlog4j.formatMsgNoLookups=true"
  LOG4J_FORMAT_MSG_NO_LOOKUPS: "true"

stages:
  - VersionCheck
  - Static Analysis
  - Test
  - SonarQube
  - Tag
  - Version
  - Build
  - Deploy

.build:
  stage: Build
  image: google/cloud-sdk
  services:
    - docker:dind
  before_script:
    - mkdir -p $HOME/.docker && echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
  script:
    - echo $CI_REGISTRY_USER
    - echo $CI_REGISTRY
    - echo ${IMAGE_TAG}
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG . --build-arg REACT_APP_ENV=${CI_COMMIT_BRANCH} --build-arg REACT_APP_BACKEND_API=${REACT_APP_BACKEND_API} --build-arg REACT_APP_GOOGLE_CLIENT_ID=${REACT_APP_GOOGLE_CLIENT_ID}
    - docker push $IMAGE_TAG
VersionCheck:
  stage: VersionCheck
  allow_failure: false
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "sandbox"'
  before_script:
    - git fetch origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME:$CI_MERGE_REQUEST_TARGET_BRANCH_NAME
    - git fetch origin $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME:$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
  script:
    - deployed_version=`git show $CI_MERGE_REQUEST_TARGET_BRANCH_NAME:package.json | sed -nE 's/^\\s*\"version\"\:\ \"(.*?)\",$/\\1/p'`
    - new_version=`git show $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME:package.json | sed -nE 's/^\\s*\"version\"\:\ \"(.*?)\",$/\\1/p'`
    - >
      echo "sandbox version: $deployed_version"
      echo "feature version: $new_version"
      if [ "$(printf '%s\n' "$deployed_version" "$new_version" | sort -V | head -n1)" = "$deployed_version" ]; then
        echo "version is incremented"
      else
        echo "Version need to be incremented on the feature branch. See the README.md file"
        exit 1
      fi
eslint:
  stage: Static Analysis
  allow_failure: false
  before_script:
    - npm ci --cache .npm --prefer-offline
  script:
    - echo "Start building App"
    - npm install
    - npm run eslint-report
    - echo "Build successfully!"
  artifacts:
    reports:
      junit: coverage/eslint-report.xml
    paths:
      - coverage/eslint-report.json

test:
  stage: Test
  allow_failure: false
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: always
    - when: always
  before_script:
    - npm ci --cache .npm --prefer-offline
  script:
    - echo "Testing App"
    - npm install
    - npm run generate-tests-report
    - echo "Test successfully!"
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - test-report.xml
      - coverage/lcov.info
sonarqube-check:
  stage: SonarQube
  allow_failure: true
  image:
    name: sonarsource/sonar-scanner-cli:4.6
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar" # Defines the location of the analysis task cache
    GIT_DEPTH: "0" # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    # wait for the quality results, true/false
    - sonar-scanner -X -Dsonar.qualitygate.wait=false -Dsonar.branch.name=$CI_COMMIT_BRANCH -Dsonar.login=$SONAR_TOKEN -Dsonar.projectVersion=$(npm run print-version --silent)
  only:
    - merge_requests
    - release
    - master
    - develop
    - sandbox
Version:
  stage: Version
  allow_failure: false
  only:
    - sandbox
    - develop
    - release
    - master
  script:
    - VERSION=`sed -nE 's/^\\s*\"version\"\:\ \"(.*?)\",$/\\1/p' package.json`
    - echo "VERSION=$CI_COMMIT_REF_SLUG$VERSION" >> build.env
  artifacts:
    reports:
      dotenv: build.env
build:sb:
  stage: Build
  allow_failure: false
  environment:
    name: sb
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$VERSION
    TF_ENV: "sb"
  extends:
    - .build
  only:
    - sandbox
  dependencies:
    - Version

build:dev:
  stage: Build
  allow_failure: false
  environment:
    name: dev
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$VERSION
    TF_ENV: "dev"
  extends:
    - .build
  only:
    - develop
  dependencies:
    - Version

build:qa:
  stage: Build
  allow_failure: false
  environment:
    name: qa
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$VERSION
    TF_ENV: "qa"
  extends:
    - .build
  only:
    - release
  dependencies:
    - Version

build:prod:
  stage: Build
  allow_failure: false
  environment:
    name: prod
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$VERSION
    TF_ENV: "prod"
  extends:
    - .build
  only:
    - master
  dependencies:
    - Version
deployment:sandbox:
  rules:
    - if: "$CI_COMMIT_BRANCH =~ /^feature/"
      when: never
    - if: $CI_COMMIT_BRANCH == "sandbox"
  variables:
    TF_ENV: "sb"
    MODULE: "frontend"
    VERSION: $VERSION
  stage: Deploy
  allow_failure: false
  trigger:
    project: in-silico-prediction/isp/isp-deployment
    strategy: depend
  needs:
    - job: Version
      artifacts: true
    - job: build:sb
      artifacts: false

deployment:dev:
  rules:
    - if: "$CI_COMMIT_BRANCH =~ /^feature/"
      when: never
    - if: $CI_COMMIT_BRANCH == "develop"
  variables:
    TF_ENV: "dev"
    MODULE: "frontend"
    VERSION: $VERSION
  stage: Deploy
  allow_failure: false
  trigger:
    project: deployment
    strategy: depend
  needs:
    - job: Version
      artifacts: true
    - job: build:dev
      artifacts: false

deployment:qa:
  rules:
    - if: "$CI_COMMIT_BRANCH =~ /^feature/"
      when: never
    - if: $CI_COMMIT_BRANCH == "release"
  variables:
    TF_ENV: "qa"
    MODULE: "frontend"
    VERSION: $VERSION
  stage: Deploy
  allow_failure: false
  trigger:
    project: deployment
    strategy: depend
  needs:
    - job: Version
      artifacts: true
    - job: build:qa
      artifacts: false

deployment:prod:
  rules:
    - if: "$CI_COMMIT_BRANCH =~ /^feature/"
      when: never
    - if: $CI_COMMIT_BRANCH == "master"
  variables:
    TF_ENV: "prod"
    MODULE: "frontend"
    VERSION: $VERSION
  stage: Deploy
  allow_failure: false
  trigger:
    project: deployment
    strategy: depend
  needs:
    - job: Version
      artifacts: true
    - job: build:prod
      artifacts: false
The deployment stage invokes the downstream project. The backend project has the same pipeline definition, so both the Frontend and Backend projects trigger the deployment project independently.
The deployment project should wait for the trigger from both projects and run only once, deploying both frontend and backend into the environment in a single run.
For a merge train, is it possible to configure merge requests from 2 different projects?
As long as those MRs are from the same project, yes: the whole idea of a merge train is, as in this example, to list multiple MRs and combine them.
However, that would trigger the FE (frontend) deployment first, then the FE and BE deployments, which is not what you want.
I would rather use a scheduled cron job which detects when both FE and BE have been deployed, for instance by querying the dates of their respective published images (latest FE and BE in the registry): if that date is more recent than the latest completed deployment for both the FE and BE builds, the cron job would trigger an actual, full deployment.
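The scheduled approach could be wired up roughly like this in the deployment project. The helper scripts and their names are hypothetical; the point is only that the deploy job is gated on a pipeline schedule rather than on the upstream triggers:

```yaml
# Sketch only: runs from a pipeline schedule, not from upstream triggers.
check-and-deploy:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Hypothetical helper: compares the push dates of the latest FE/BE
    # images in the registry against the last completed deployment,
    # and exits non-zero when there is nothing new to deploy.
    - ./scripts/newer_images_exist.sh || exit 0
    - ./scripts/deploy_all.sh
```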

rules:changes always evaluates as true in MR pipeline

I have a monorepo where each package should be built as a Docker image.
I created a trigger job for each package that triggers a child pipeline.
In the MR, my changes rule is ignored and all child pipelines are triggered.
.gitlab-ci.yml
---
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID || $CI_COMMIT_BRANCH

trigger-package-a:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-a/**/*"
  variables:
    PACKAGE: package-a

trigger-package-b:
  stage: build
  trigger:
    include: .gitlab/ci/packages/package-gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - "packages/package-b/**/*"
  variables:
    PACKAGE: package-b

done_job:
  stage: deploy
  script:
    - "echo DONE"
    - "cat config.json"

stages:
  - build
  - deploy
package-gitlab-ci.yml
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
    - changes:
        - "packages/${PACKAGE}/**/*"

stages:
  - bootstrap
  - validate

cache:
  key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
  paths:
    - packages/${PACKAGE}/node_modules/
  policy: pull

install-package:
  stage: bootstrap
  script:
    - echo ${PACKAGE}
    - echo '{"package":${PACKAGE}}' > config.json
    - "cd packages/${PACKAGE}/"
    - yarn install --frozen-lockfile
  artifacts:
    paths:
      - config.json
  cache:
    key: "${PACKAGE}_${CI_COMMIT_REF_SLUG}"
    paths:
      - packages/${PACKAGE}/node_modules/
    policy: pull-push

lint-package:
  script:
    - yarn lint
  stage: validate
  needs: [install-package]
  before_script:
    - "cd packages/${PACKAGE}/"

test-package:
  stage: validate
  needs: [lint-package]
  before_script:
    - "echo working on ${PACKAGE}"
    - "cd packages/${PACKAGE}/"
  rules:
    - if: $CI_MERGE_REQUEST_ID
  script:
    - yarn test
It looks like your downstream pipeline defines a workflow with 2 independent rules, if and changes. This can cause the jobs to be included as soon as the first condition is met, i.e. whenever it is an MR pipeline. Try removing the dash in front of changes, as in the example here, so the two conditions are treated as a single rule:
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_ID
      changes:
        - "packages/${PACKAGE}/**/*"
EDIT: This recent issue states that rules:changes does not work as expected with trigger, so you may actually need to remove the changes from the upstream pipeline and solve this in the downstream pipeline.
Side note, not directly related to your issue: the GitLab Docs provide a workflow template to run branch or MR pipelines without creating duplicates. You can use this in your upstream pipeline if it helps:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

GitLab CI pipeline: run job only with git tag

Need help from GitLab gurus. I have the following pipeline below.
I expect the "sync_s3:prod" job to run only when I push a new git tag, but GitLab triggers both jobs. Why does it behave like this? I created the $CI_COMMIT_TAG rule for only one job. Any ideas?
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - changes:
        - pp2/*
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
If I understand the question correctly, you do not want the sync_s3:nonprod job to run if sync_s3:prod runs. (?)
To achieve this, on the sync_s3:nonprod job you should be able to copy the same rule from sync_s3:prod together with when: never:
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
      when: never
    - changes:
        - pp2/*
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
As #slauth already mentioned in his answer, the rules need to be adjusted per job of the pipeline; I only post this as an addition to the original answer above.
To prevent a job from running when a git tag is present, you need to set the rule explicitly on the corresponding job.
stages:
  - sync:nonprod
  - sync:prod

.sync_s3:
  image:
    name: image
    entrypoint: [""]
  script:
    - aws configure set region eu-west-1
    - aws s3 sync ${FOLDER_ENV} s3://img-${AWS_ENV} --delete

sync_s3:prod:
  stage: sync:prod
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - prod/*
  extends: .sync_s3
  variables:
    AWS_ENV: prod
    FOLDER_ENV: prod/
  tags:
    - gaming_prod

sync_s3:nonprod:
  stage: sync:nonprod
  rules:
    - changes:
        - pp2/*
    - if: $CI_COMMIT_TAG
      when: never
  extends: .sync_s3
  variables:
    AWS_ENV: nonprod
    FOLDER_ENV: pp2/
  tags:
    - gaming_nonprod
For further clarification:
The following rule evaluates like a logical AND, so it is true only if there is a $CI_COMMIT_TAG AND there are changes in prod/*. Only when both conditions are met is the job added to the pipeline.
rules:
  - if: $CI_COMMIT_TAG
    changes:
      - prod/*
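By contrast, if you wanted OR semantics (run when there is a tag OR when prod/* changed), you would split the conditions into two separate rule entries, since each entry in the rules array is evaluated independently:

```yaml
rules:
  - if: $CI_COMMIT_TAG # matches any tag push
  - changes:           # OR matches changes under prod/
      - prod/*
```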

GitLab: chosen stage does not exist

I am trying to put together a fairly complex pipeline with several jobs that run sequentially across our different environments, to roll out our Terraform changes across our infra. The sequence of jobs should run automatically in our infraci environment, which is only ever rolled out to via CI, then stop and require a button click to start the deployment to our dev environment, which has actual (albeit dev) users. Of course I don't want to write the same code over and over again, so I've tried to be as DRY as possible. Here is my gitlab-ci.yml:
---
# "variables" & "default" are used by all jobs
variables:
  TF_ROOT: '${CI_PROJECT_DIR}/terraform'
  TF_CLI_CONFIG_FILE: .terraformrc
  AWS_STS_REGIONAL_ENDPOINTS: regional
  AWS_DEFAULT_REGION: eu-west-2
  ASG_MODULE_PATH: module.aws_asg.aws_autoscaling_group.main_asg

default:
  image:
    name: hashicorp/terraform:light
    entrypoint: ['']
  cache:
    paths:
      - ${TF_ROOT}/.terraform
  tags:
    - nonlive # This tag matches the group wide GitLab runner.
  before_script:
    - cd ${TF_ROOT}

# List of all stages (jobs within the same stage are executed concurrently)
stages:
  - init
  - infraci_plan
  - infraci_taint
  - infraci_apply
  - dev_plan
  - dev_taint
  - dev_apply

# "Hidden" jobs we use as templates to improve code reuse.
.default:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

.plan:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_plan
  script:
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform plan
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_ENVIRONMENT_NAME != "infraci"'
      when: manual
      allow_failure: false

.taint:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_taint
  script: terraform taint ${ASG_MODULE_PATH}
  needs:
    - ${CI_ENVIRONMENT_NAME}_plan

.apply:
  extends: .default
  stage: ${CI_ENVIRONMENT_NAME}_apply
  script: terraform apply -auto-approve

# Create actual jobs
## init - runs once per pipeline
init:
  stage: init
  script: terraform init
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: always
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_PIPELINE_SOURCE == "web"'
      when: manual

## infraci - auto deploy
infraci_plan:
  extends: .plan
  environment:
    name: infraci
infraci_taint:
  extends: .taint
  environment:
    name: infraci
infraci_apply:
  extends: .apply
  environment:
    name: infraci

## dev - manual deployment
dev_plan:
  extends: .plan
  environment:
    name: dev
dev_taint:
  extends: .taint
  environment:
    name: dev
dev_apply:
  extends: .apply
  environment:
    name: dev
Unfortunately this fails validation with the following error:
infraci_plan job: chosen stage does not exist; available stages are .pre, init, infraci_plan, infraci_taint, infraci_apply, dev_plan, dev_taint, dev_apply, .post
My assumption is that this is caused by interpolating CI_ENVIRONMENT_NAME in the hidden jobs, while the value is only set in the concrete jobs where they are actually instantiated.
If that's the case, though, what's a way to get the setup I need without a severe amount of duplication?
You are right, it is not possible to use a variable in stage. The only way I see is to define the stage directly in your job and remove stage from .plan:
infraci_plan:
  extends: .plan
  stage: infraci_plan
  environment:
    name: infraci
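The same pattern would apply to the remaining concrete jobs: each names its stage literally while still inheriting the script and rules from the hidden templates. A sketch for two of them:

```yaml
infraci_taint:
  extends: .taint
  stage: infraci_taint
  environment:
    name: infraci

dev_plan:
  extends: .plan
  stage: dev_plan
  environment:
    name: dev
```

Depending on your GitLab version, the ${CI_ENVIRONMENT_NAME}_plan reference in .taint's needs: may need the same literal treatment, since variable expansion in needs: is version-dependent.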
