GitLab Pipeline: Rollback job should execute only when the corresponding previous job fails in a multi-env pipeline - gitlab

I need some help with this. I have a pipeline with 4 stages:
Build
Deploy
Test
Rollback
Each stage has two jobs, one per environment (each environment is a different VM). I need the Rollback stage to execute only when the corresponding previous phase (Test) fails. For example: Job-4-RollB must execute only when Job-3-Test fails, and not for any other job.
I hope I explained it well.
Pipeline-Diagram

Use when: on_failure to do this. Unfortunately, this triggers when any job in the previous stage fails, so you'll need some method to distinguish which environment(s) to roll back in the final stage.
If you must keep only these 4 pipeline stages, one way to make that determination is to pass artifacts to the rollback job so it knows whether to perform the rollback. For example, an artifact containing either a 0 or a 1.
.env-test:
  stage: test
  script:
    # assumes the test run leaves a 1 in results.txt when tests fail, 0 otherwise
    - make test "$ENV_NAME" > results.txt
  artifacts:
    paths: # could also consider `artifacts:reports:dotenv`
      - results.txt
    when: always

Env-1-Test:
  extends: .env-test
  variables:
    ENV_NAME: "1"

Env-2-Test:
  extends: .env-test
  variables:
    ENV_NAME: "2"

.rollback:
  when: on_failure # runs when 1 or more jobs from the previous stage fail
  stage: rollback
  script: |
    test_result="$(cat results.txt)"
    if [[ "${test_result}" == "1" ]]; then
      make rollback "$ENV_NAME"
    else
      echo "no rollback needed for env ${ENV_NAME}"
    fi

Env-1-Rollback:
  extends: .rollback
  variables:
    ENV_NAME: "1"
  needs: [Env-1-Test] # only get the Env-1-Test artifact, start right away

Env-2-Rollback:
  extends: .rollback
  variables:
    ENV_NAME: "2"
  needs: [Env-2-Test] # same, but for Env 2
One advantage of this approach is that needs: will allow the rollback job to run immediately after the corresponding test job fails.
This is probably the best way to handle this scenario.
You can orchestrate a pipeline so that each rollback job strictly only runs when the corresponding environment fails, but it requires a far more verbose job configuration: every job would need to declare needs:, and every environment would need its own build/deploy/test stages.
OR
You could make use of parent-child pipelines for each environment, such that each child pipeline's rollback stage corresponds to just 1 job/environment.
Env-1-Child-Trigger:
  trigger:
    include: path/to/pipeline.yaml
  variables:
    ENV_NAME: "1"

Env-2-Child-Trigger:
  trigger:
    include: path/to/pipeline.yaml
  variables:
    ENV_NAME: "2"
Where your pipeline.yaml describes a singular build/deploy/test/rollback pipeline.
This is probably the most concise configuration.
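A minimal sketch of what that child pipeline.yaml could contain (the make targets are assumptions, not from the original question):

```yaml
# path/to/pipeline.yaml: one build/deploy/test/rollback pipeline,
# parameterized by the ENV_NAME variable passed down from the trigger job.
stages:
  - build
  - deploy
  - test
  - rollback

build:
  stage: build
  script: make build "$ENV_NAME"

deploy:
  stage: deploy
  script: make deploy "$ENV_NAME"

test:
  stage: test
  script: make test "$ENV_NAME"

rollback:
  stage: rollback
  when: on_failure # safe here: only this environment's jobs exist in this child pipeline
  script: make rollback "$ENV_NAME"
```

Because each child pipeline contains only one environment, when: on_failure in the rollback job can no longer be triggered by another environment's failing test.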

Related

Run a pre job before GitLab pipeline

I want to run a job each time a new pipeline gets triggered. It's a kind of preparation job which should always be executed before every other job defined inside the .gitlab-ci.yml
For Example
stages:
  - build
  - test

my-prep-job:
  stage: .pre
  script:
    # this is the job I want to run every time a pipeline gets triggered, before the other jobs
    # it also produces an artifact that I want to use in the rest of the jobs
    ...
  artifacts:
    ...

Build:
  stage: build
  ...

Test:
  stage: test
  ...
Please let me know if this is possible or if there is another way to do it.
Thanks in advance.
Edit
I did try adding .pre under stages. The thing is, I had to rewrite the rules and add them to my-prep-job as well.
stages:
  - .pre # I did add it over here
  - build
  - test
I also had to add rules to this job so that it would not run on its own on a normal commit/push.
Is there any possibility to extend the ".pre" stage of a GitLab pipeline?
You could use !reference tags to include certain keyword sections.
For example:
.pre:
  script:
    - echo from pre

example:
  stage: test
  script:
    - !reference [.pre, script]
    - ...

This will include the script section of .pre in the example job.
You can use !reference for most job keywords, such as artifacts or rules.
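For instance (a sketch; the job and hidden-template names are made up), a shared rules section can be reused the same way:

```yaml
.shared:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

my-prep-job:
  stage: .pre
  rules: !reference [.shared, rules]
  script:
    - echo "preparing"
```

This keeps the rules in one place, so jobs in the .pre stage and regular jobs stay in sync when the conditions change.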

How to run Job when Pipeline was triggered manually

I set up jobs to run only when pushing/merging to the branch "dev", but I also want to be able to run them when I trigger the pipeline manually. Something like this:
test:
  stage: test
  <this step should always run>

build:
  stage: build
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
    - if: <also run if the pipeline was run manually, but skip if it was triggered by anything else>
This job is defined in a child "trigger" pipeline. This is what the parent looks like:
include:
  - template: 'Workflows/MergeRequest-Pipelines.gitlab-ci.yml'

stages:
  - triggers

microservice_a:
  stage: triggers
  trigger:
    include: microservice_a/.gitlab-ci.microservice_a.yml
    strategy: depend
  rules:
    - changes:
        - microservice_a/*
The effect I want to achieve is:
Run test in all cases
Run build in the child pipeline only when pushing/merging to "dev"
Also run the build job when the pipeline is run manually
Do not run the build job in any other case (like an MR)
The rules examples showcase:
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
The when: manual should be enough in your case: it ensures the job does not run unless a user starts it.
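Assuming "manually" means started from the "Run pipeline" page (pipeline source web), the build rules could be sketched like this; note that inside a child pipeline the source becomes parent_pipeline, so this check may belong in the parent trigger job instead:

```yaml
build:
  stage: build
  script: ./build.sh # placeholder
  rules:
    - if: '$CI_COMMIT_REF_NAME == "dev"' # push/merge to dev
    - if: '$CI_PIPELINE_SOURCE == "web"' # started manually from the UI
    # no rule matches in any other case (e.g. an MR), so the job is skipped
```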
Bonus question: This job is defined in a child "trigger" pipeline
Then it is related to gitlab-org/gitlab issue 201938, which was supposed to be fixed in GitLab 13.5 (Oct. 2020), but that only allows manual actions for parent-child pipelines (illustrated by this thread).
Double-check the environment variables set in your child job:
echo $CI_JOB_MANUAL
If true, that indicates the job is part of a manually triggered pipeline.
While issue 22448 ("$CI_JOB_MANUAL should be set in all dependent jobs") points to this option not working, it includes a workaround.

Adds needs relations to GitLab CI yaml but got an error: the job was not added to the pipeline

I am trying to add needs between jobs in the Gitlab CI yaml configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I used the GitLab CI online lint tool to check my syntax, and it is correct.
But when I pushed the code, it always complained:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI pipeline did not run at all.
Update: I finally made it work. I think the position of needs is sensitive: after moving every needs directly under its stage key, it worked. My original script had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case, that means sharing the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
That should work, in my experience^^
Finally I made it. I think the needs position is sensitive, move all needs under the stage, it works
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
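A minimal sketch of such a stageless pipeline (the scripts are placeholders):

```yaml
# No stages: block at all; needs: alone defines the execution order.
maven-build:
  needs: []
  script: ./mvnw package

test:
  needs: ["maven-build"]
  script: ./mvnw verify

deploy:
  needs: ["test"]
  script: ./deploy.sh
```

Here test starts as soon as maven-build finishes, and deploy as soon as test finishes, with no stage boundaries to wait on.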
The rules in both jobs should be the same; otherwise, GitLab cannot create a job dependency between jobs whose trigger rules differ.
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that run later with a "." at the start.
Another interesting thing: GitLab's own CI/CD Lint online editor does not complain that there is an error, so you have to start the pipeline to see it.
Below, notice the "." in ".success_notification" and ".failure_notification":
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

#SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure

#SOME CODE

Gitlab CI can trigger other project pipeline stage?

I have a project A and an E2E project. When project A deploys, I want it to trigger the E2E pipeline and run the tests, but only the test stage; we don't need the trigger to run E2E's build, deploy, etc.
e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend
    stage: test
I tried to use stage inside the trigger configuration, but got the error unknown keys: stage.
Do you have any suggestions?
In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using the rules syntax:
build-from-trigger:
  stage: build
  when: never
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
      when: always
  script:
    - ./build.sh # this is just an example; you'd run whatever you normally would here
The first when statement, when: never sets the default for the job. By default, this job will never run. Then using the rule syntax, we set a condition that will allow the job to run. If the CI_COMMIT_REF_NAME variable (the branch or tag name) is master AND the CI_PIPELINE_SOURCE variable (whatever kicked off this pipeline) is a trigger, then we run this job.
You can read about the when keyword here: https://docs.gitlab.com/ee/ci/yaml/#when, and you can read the rules documentation here: https://docs.gitlab.com/ee/ci/yaml/#rules
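One caveat worth verifying against the predefined variables documentation: $CI_PIPELINE_SOURCE is "pipeline" for multi-project pipelines created with the trigger: keyword (as in the question), and "trigger" only for pipelines started with a trigger token via the API. A sketch that covers both cases:

```yaml
build-from-trigger:
  stage: build
  script:
    - ./build.sh # placeholder
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master" && ($CI_PIPELINE_SOURCE == "pipeline" || $CI_PIPELINE_SOURCE == "trigger")'
```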

Make a stage happen in gitlab-ci if one of two other stages completed

I have a pipeline that runs automatically when code is pushed to GitLab. There's a terraform apply step that I want to be able to run manually in one case (resources destroyed/recreated) and automatically in another (resources simply added or destroyed). I almost got this with a manual step, but I can't see how to make the pipeline automatic in the safe case. The manual terraform apply step would not be the last in the pipeline.
Is it possible to say "do step C if step A completed or step B completed", i.e. branch the pipeline? Or could I do it with two pipelines, where a failure in one triggers the other?
Current partial test code (gitlab CI yaml) here:
# stop with a warning if resources will be created and destroyed
check:
  stage: check
  script:
    - ./terraformCheck.sh
  allow_failure: true

# Apply changes manually, whether there is a warning or not
override:
  stage: deploy
  environment:
    name: production
  script:
    - ./terraformApply.sh
  dependencies:
    - plan
  when: manual
  allow_failure: false
  only:
    - master

log:
  stage: log
  environment:
    name: production
  script:
    - ./terraformLog.sh
  when: always
  only:
    - master
