How can I skip a job based on the changes made in .gitlab-ci.yml?

I have two jobs in GitLab CI/CD: build-job (which takes ~10 minutes) and deploy-job (which takes ~2 minutes). In a pipeline, build-job succeeded while deploy-job failed because of a typo in the script. How can I fix this typo and run only deploy-job, instead of rerunning build-job?
My current .gitlab-ci.yml is as follows:
stages:
  - build
  - deploy

build-job:
  image: ...
  stage: build
  only:
    refs:
      - main
    changes:
      - .gitlab-ci.yml
      - pubspec.yaml
      - test/**/*
      - lib/**/*
  script:
    - ...
  artifacts:
    paths:
      - ...
    expire_in: 1 hour

deploy-job:
  image: ...
  stage: deploy
  dependencies:
    - build-job
  only:
    refs:
      - main
    changes:
      - .gitlab-ci.yml
      - pubspec.yaml
      - test/**/*
      - lib/**/*
  script:
    - ...
I imagine something like:
1. The triggerer (me) fixes the typo in deploy-job's script and pushes the changes.
2. The pipeline is triggered.
3. The runner looks at the files described by build-job/only/changes and detects a single change in the .gitlab-ci.yml file.
4. The runner looks at the .gitlab-ci.yml file and detects there IS a change, but since this change does not belong to the build-job section AND the job previously succeeded, this job is skipped.
5. The runner looks at the files described by deploy-job/only/changes and detects a single change in the .gitlab-ci.yml file.
6. The runner looks at the .gitlab-ci.yml file and detects there is a change, and since this change belongs to the deploy-job section, this job is executed.
This way, only deploy-job is executed. Can this be done, either using rules or only?
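For reference, a rules-based version of the same filter (the other keyword mentioned above) would look roughly like the sketch below; this is only a syntax translation of deploy-job, with the image and script placeholders kept from the question, not a change in behaviour:
deploy-job:
  image: ...
  stage: deploy
  dependencies:
    - build-job
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      changes:
        - .gitlab-ci.yml
        - pubspec.yaml
        - test/**/*
        - lib/**/*
  script:
    - ...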

There is a straightforward way; you just need access to the GitLab CI interface.
1. Push and let the pipeline start normally.
2. Cancel the first job.
3. Run the desired one (deploy).
Another way is to create a new job, intended to be run only when needed, that deploys without depending on build-job. Just copy the current deploy code, drop the dependencies, and add manual mode with:
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
      allow_failure: true
As in the docs.
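A sketch of what that extra job could look like, assuming a hypothetical name deploy-job-manual and reusing the placeholders from the question; the rules block follows the docs snippet above but is keyed on the main branch instead of merge requests:
deploy-job-manual:
  image: ...          # same image as deploy-job
  stage: deploy
  # dependencies: is deliberately left out, so this job does not pull build-job artifacts
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
      allow_failure: true
  script:
    - ...             # same script as deploy-job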

Related

How to run a particular stage in GitLab after the execution of child pipelines?

I'm using GitLab and the CI config has the following stages:
stages:
  - test
  - child_pipeline1
  - child_pipeline2
  - promote-test-reports
At any time, only one of the child pipelines will run after the test stage, i.e. either child_pipeline1 or child_pipeline2, never both at once.
Now, I have added another stage called promote-test-reports which I would like to run at the end, i.e. after the successful execution of either child pipeline.
But I'm completely blocked here. This promote-test-reports job comes from a template which I have included in the main CI config file like:
# Include the template file
include:
  - project: arcesium/internal/vteams/commons/commons-helper
    ref: promote-child-artifacts
    file: 'templates/promote-child-artifacts.yml'
I'm overriding the GitLab project token in the same main file like below:
test:
  stage: promote-test-reports
  trigger:
    include: adapter/child_pipelin1.yml
    strategy: depend
  variables:
    GITLAB_PRIVATE_TOKEN: $GITLAB_TOKEN
In the above definition in the main CI config file, I'm trying to use strategy: depend to wait for the successful execution of child_pipeline1 and then run this stage, but it throws an error (jobs:test config contains unknown keys: trigger) and the approach does not work. The reason is that I'm using a script in the definition of this stage (promote-test-reports) in the template, and as per the documentation, script and strategy cannot go together.
Following is the definition of this stage in the template:
test:
  stage: promote-test-reports
  image: "495283289134.dkr.ecr.us-east-1.amazonaws.com/core/my-linux:centos7"
  before_script:
    - "yum install unzip -y"
  variables:
    GITLAB_PRIVATE_TOKEN: $GITLAB_TOKEN
  allow_failure: true
  script:
    - 'cd $CI_PROJECT_DIR'
    - 'echo running'
  when: always
  rules:
    - if: $CI_PIPELINE_SOURCE == 'web'
  artifacts:
    reports:
      junit: "**/testresult.xml"
      coverage_report:
        coverage_format: cobertura
        path: "**/**/coverage.xml"
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
The idea of using the strategy attribute failed, and I cannot remove the script logic from the template either. What is an alternative way of running my job (promote-test-reports) at the end? Remember it is an OR condition: it should run after either child_pipeline1 or child_pipeline2.
I would really appreciate your help.
Finally, I was able to do it by putting strategy: depend on the jobs that trigger the child pipelines. I was doing it incorrectly earlier by putting it on the promote-test-reports stage.
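A minimal sketch of that working arrangement, with hypothetical trigger job names; the point is that strategy: depend sits on the jobs that trigger the child pipelines, while the templated promote job stays an ordinary job in the final stage:
trigger-child-pipeline1:
  stage: child_pipeline1
  trigger:
    include: adapter/child_pipelin1.yml   # path taken from the question
    strategy: depend                      # parent waits for this child pipeline to finish

trigger-child-pipeline2:
  stage: child_pipeline2
  trigger:
    include: adapter/child_pipeline2.yml  # hypothetical path for the second child
    strategy: depend

# the templated job keeps its script and simply runs in the last stage
test:
  stage: promote-test-reports
  script:
    - 'echo running'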

Added needs relations to the GitLab CI YAML but got an error: the job was not added to the pipeline

I am trying to add needs between jobs in the GitLab CI YAML configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I have used the GitLab CI online lint tool to check my syntax, and it is correct.
But when I push the code, it always complains:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI did not run at all.
Update: Finally I made it. I think the needs position is sensitive: after I moved all the needs entries to sit directly under stage, it works. My original script had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case that means sharing the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
That should work, from my experience^^
Finally I made it. I think the needs position is sensitive, move all needs under the stage, it works
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
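As an illustration of that 14.2+ behaviour, a stageless pipeline could look like the sketch below; the job names are made up and the execution order comes entirely from needs:
# no stages: block; every job declares what it waits for
build:
  script: echo "building"

unit-test:
  needs: ["build"]
  script: echo "testing"

deploy:
  needs: ["unit-test"]
  script: echo "deploying"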
The rules in both jobs should be the same; otherwise GitLab cannot create the job dependency between the jobs when the trigger rules differ.
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that will run later with a "." at the start of their names. Another interesting thing is that GitLab's own CI/CD Lint online editor does not complain that there is an error, so you have to start the pipeline to see it.
Below, notice the "." in ".success_notification" and ".failure_notification".
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

#SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure

#SOME CODE

run gitlab jobs sequentially

I have two simple stages (build and test), and I want the jobs in the pipeline to run sequentially. That is, the test job should not run until the build job has passed completely.
My GitLab file:
stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests

test:
  stage: test
  services:
  script:
    - mvn verify
    - mvn jacoco:report
  artifacts:
    reports:
      junit:
        - access/target/surefire-reports/TEST-*.xml
    paths:
      - access/target/site/jacoco
    expire_in: 1 week
  only:
    - merge_requests
Can I add
needs:
  - build
in the test stage?
Given the simplicity of your build file, I do not think you actively need needs. According to the documentation, all stages are executed sequentially.
The pitfall you are in right now is the only reference. The build stage will run for any branch and therefore ignore merge requests. If you add an only directive to your build job, you might get the result you are looking for, like:
build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests
    - master # might be main, develop, etc., whatever your long-living branches are
This way it will not be triggered for each branch but only for merge requests and the long-living branches; see the only documentation. Now the execution is not assigned to the branch but to the merge request, and you will have your expected outcome (at least that is what I assume).
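If you still want the dependency to be explicit rather than implied by stage order, adding the needs line the question asks about should also work; a sketch of just the test job, where only the needs line is new:
test:
  stage: test
  needs: ["build"]   # wait specifically for the build job
  script:
    - mvn verify
    - mvn jacoco:report
  only:
    - merge_requests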

Make a stage happen in gitlab-ci if one of two other stages completed

I have a pipeline that runs automatically when code is pushed to GitLab. There's a terraform apply step that I want to be able to run manually in one case (resources destroyed/recreated) and automatically in another (resources simply added or destroyed). I almost got this with a manual step but can't see how to make the pipeline automatic in the safe case. The manual terraform apply step would not be the last in the pipeline.
Is it possible to say 'do step C if step A completed or step B completed'? Kind of branch the pipeline? Or could I do it with two pipelines, and failure in one triggers the other?
Current partial test code (GitLab CI YAML) here:
# stop with a warning if resources will be created and destroyed
check:
  stage: check
  script:
    - ./terraformCheck.sh
  allow_failure: true

# Apply changes manually, whether there is a warning or not
override:
  stage: deploy
  environment:
    name: production
  script:
    - ./terraformApply.sh
  dependencies:
    - plan
  when: manual
  allow_failure: false
  only:
    - master

log:
  stage: log
  environment:
    name: production
  script:
    - ./terraformLog.sh
  when: always
  only:
    - master
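For the "step C after step A or step B" part, one option in newer GitLab versions (13.10+) is needs with optional: true, so the job waits for whichever needed job is actually present in the pipeline; the sketch below uses hypothetical job names and is not the asker's exact setup:
apply:
  stage: deploy
  needs:
    - job: plan-safe           # hypothetical job, included only for additive changes
      optional: true
    - job: plan-destructive    # hypothetical job, included only when resources are replaced
      optional: true
  script:
    - ./terraformApply.sh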

How to control whether a stage runs based on the previous stage's result, without using artifacts?

We have a project hosted on an internal Gitlab installation.
The Pipeline of the project has 3 stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when Tests fail.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: Is there an alternative solution to artifacts to achieve this task?
The used .gitlab-ci.yml looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing fully automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo; your only advantage over running the targets locally right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI tasks, because it goes against this basic idea. If you insist on your approach, you could look up the artifacts of the previous stages and abort the manual execution in case something is wrong.
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the master and not saved on the runners. You should be fine having your artifacts passed from stage to stage.
By the way, the default for when is on_success, which means a job executes only when all jobs from prior stages succeed.
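As a sketch of that default, dropping when: manual from the deploy job means it runs automatically only when all_tests succeeds and is skipped otherwise; the script line is a placeholder:
prod:
  stage: deploy
  script:
    - ./deploy.sh    # placeholder for the real deploy commands
  # when: defaults to on_success, so this job is skipped if all_tests fails
  environment: prod
  only:
    - develop
    - master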
