How to make one job only run after another job passes in a GitLab pipeline

From what I have just run into, the "needs" keyword in a gitlab-ci.yml file only checks whether the job named in "needs" has been run - not whether it passed or failed.
I ran the code below in my pipeline, and the "build-latest" job runs even if the "test-tag" job fails.
I only want the "build-latest" job to run if the "test-tag" job passes.
How is this achieved?
build-latest:
  stage: publish
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  rules:
    #- if: $CI_COMMIT_TAG != null
    - if: $CI_COMMIT_REF_NAME == "add-latest-tagging"
      when: always
  needs:
    - test-tag
  script:
    - crane auth login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

The issue lies in the fact that you added
when: always
It is true that since you specified needs, the build-latest job will wait for the test-tag job to execute first.
After the test-tag job concludes, GitLab evaluates whether it should execute the build-latest job.
Adding the always clause to the build-latest job forces it to execute even if test-tag fails, provided the test-tag job has at least concluded.
Long story short, you should remove the when: always clause.
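A minimal sketch of the fixed job, keeping the same branch condition (on_success is the default and could be omitted entirely; it is spelled out here only for contrast with always):

```yaml
build-latest:
  stage: publish
  rules:
    - if: $CI_COMMIT_REF_NAME == "add-latest-tagging"
      when: on_success # default: run only if the needed jobs (test-tag) passed
  needs:
    - test-tag
  script:
    - crane auth login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
```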

If you want a job to run only when one or more previous jobs pass, then you want to put it in a separate stage.
It's hard to say how you've broken up the jobs without seeing more of the CI file, but assuming:
test-tag job is in stage: test
stage: publish comes after test
Then it should work the way you want simply by removing the needs: option from your build-latest job.
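For example, with hypothetical stage ordering and placeholder scripts (run-tests.sh and publish.sh are assumptions, not from the original config):

```yaml
stages:
  - test
  - publish

test-tag:
  stage: test
  script:
    - ./run-tests.sh # placeholder

build-latest:
  stage: publish # runs only after every job in the test stage succeeds
  script:
    - ./publish.sh # placeholder
```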

Related

GitLab Pipeline: Needs Job to execute only when previous job fail in multi-env. pipeline

I need some help with this.
I have a pipeline with 4 stages:
Build
Deploy
Test
Rollback
Each of these stages has two jobs, one per environment (each environment is a different VM). I need the Rollback stage to execute only when the corresponding previous Test stage fails, for example:
The Job-4-RollB job has to execute only when the Job-3-Test job fails, not any other.
I hope I explained that well.
[Pipeline diagram image]
Use when: on_failure to do this. Unfortunately, this triggers when any job in the previous stage fails, so you'll need some method of distinguishing which environment(s) to roll back in the final stage.
If you must have only these 4 pipeline stages, one way to make that determination would be to pass artifacts to the rollback job so that it knows whether to perform the rollback. For example, an artifact containing either a 0 or a 1.
.env-test:
  stage: test
  script:
    # record the outcome in results.txt, but still fail the job when the tests fail
    - make test "$ENV_NAME" || { echo 1 > results.txt; exit 1; }
    - echo 0 > results.txt
  artifacts:
    paths: # could also consider `artifacts:reports:dotenv`
      - results.txt
    when: always

Env-1-Test:
  extends: .env-test
  variables:
    ENV_NAME: "1"

Env-2-Test:
  extends: .env-test
  variables:
    ENV_NAME: "2"

.rollback:
  when: on_failure # runs when 1 or more jobs from the previous stage fail
  stage: rollback
  script: |
    test_result="$(cat results.txt)"
    if [[ "${test_result}" == "1" ]]; then
      make rollback "$ENV_NAME"
    else
      echo "no rollback needed for env ${ENV_NAME}"
    fi

Env-1-Rollback:
  extends: .rollback
  variables:
    ENV_NAME: "1"
  needs: [Env-1-Test] # only get the Env-1-Test artifact, start right away

Env-2-Rollback:
  extends: .rollback
  variables:
    ENV_NAME: "2"
  needs: [Env-2-Test] # same, but for Env 2
One advantage of this approach is that needs: will allow the rollback job to run immediately after the corresponding test job fails.
This is probably the best way to handle this scenario.
You can orchestrate a pipeline to get the behavior that each rollback job will strictly only run when the corresponding env fails, but it will require a far more verbose job configuration. Basically every job would need to declare needs: AND every environment needs its own stages for build/test/deploy.
OR
You could make use of parent-child pipelines for each environment, such that each child pipeline's rollback stage corresponds to just 1 job/environment.
Env-1-Child-Trigger:
  trigger:
    include: path/to/pipeline.yaml
  variables:
    ENV_NAME: "1"

Env-2-Child-Trigger:
  trigger:
    include: path/to/pipeline.yaml
  variables:
    ENV_NAME: "2"
Where your pipeline.yaml describes a singular build/deploy/test/rollback pipeline.
This is probably the most concise configuration.
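A hypothetical pipeline.yaml for a single environment might look like the following (stage names and make targets are placeholders; ENV_NAME is passed down by the trigger job):

```yaml
# child pipeline for one environment; ENV_NAME comes from the parent trigger job
stages:
  - build
  - deploy
  - test
  - rollback

build:
  stage: build
  script: make build "$ENV_NAME"

deploy:
  stage: deploy
  script: make deploy "$ENV_NAME"

test:
  stage: test
  script: make test "$ENV_NAME"

rollback:
  stage: rollback
  when: on_failure # in this child pipeline, only this env's jobs can fail
  script: make rollback "$ENV_NAME"
```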

Gitlab CI can trigger other project pipeline stage?

I have a project A and an E2E project. When project A deploys, I want it to trigger the E2E pipeline to run tests, but only the test stage; we don't need the trigger to run E2E's build, deploy, etc.
e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend
    stage: test
I have tried to use stage inside the trigger config, but got the error unknown keys: stage.
Does anyone have suggestions?
In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using the rules syntax:
build-from-trigger:
  stage: build
  when: never
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
      when: always
  script:
    - ./build.sh # this is just an example, you'd run whatever you normally would here
The first when statement, when: never, sets the default for the job: by default, this job will never run. Then, using the rules syntax, we set a condition that will allow the job to run. If the CI_COMMIT_REF_NAME variable (the branch or tag name) is master AND the CI_PIPELINE_SOURCE variable (whatever kicked off this pipeline) is a trigger, then we run this job.
You can read about the when keyword here: https://docs.gitlab.com/ee/ci/yaml/#when, and you can read the rules documentation here: https://docs.gitlab.com/ee/ci/yaml/#rules

How can I create manually-run GitLab pipeline jobs?

I would like to know how to manually trigger specific jobs in a project's CI pipeline.
Since there is only one gitlab-ci.yml file, I can define many jobs to be executed one after the other sequentially. But what if I want to start a manual CI pipeline that only carries out one job?
As I understand it, every time the pipeline runs, it will run all jobs, unless I use only and similar parameters. For instance, with this simple pipeline config:
stages:
  - build

build:
  stage: build
  script:
    - npm i
    - npm run build
    - echo "successful build"
What do I do if I want to only run an echo job that runs a simple echo "hello" script, but does only that and only when I manually run it? There are no 'triggers' for a job like that afaik.
Is this even a possibility?
Thanks for the clarification!
Apparently, the solution is pretty simple; I just needed to add a when: manual parameter to the job (and define its stage, since echo is not one of the default stages):
stages:
  - echo

echo:
  stage: echo
  script:
    - echo 'this is a manual job'
  when: manual
Once that's done, the job can be triggered independently from the pipeline view in the GitLab UI.

Gitlab-ci - Pipeline failing for no job

Here is my .gitlab-ci.yml file:
script1:
  only:
    refs:
      - merge_requests
      - master
    changes:
      - script1/**/*
  script: echo 'script1 done'

script2:
  only:
    refs:
      - merge_requests
      - master
    changes:
      - script2/**/*
  script: echo 'script2 done'
I want script1 to run whenever there is a change in script1 directory; likewise script2.
I tested these with a change in script1, a change in script2, change in both the directories, and no change in either of these directories.
The first three cases pass as expected, but the fourth case, the one with no change in either directory, fails.
In the overview, GitLab gives the message
Could not retrieve the pipeline status. For troubleshooting steps, read the documentation.
In the Pipelines tab, I have an option to Run pipeline. Clicking on that gives the error
An error occurred while trying to run a new pipeline for this Merge Request.
If there is no job, I want the pipeline to succeed.
Gitlab pipelines do not have any independent validity outside of jobs. A pipeline, by definition, consists of one or more jobs. In your example 4 above no jobs are created. The simplest hack you can add to your pipeline is a job which always runs:
dummyjob:
  script: exit 0
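A slightly more targeted variant (a sketch; the job name and echo message are arbitrary) adds the placeholder only to merge request pipelines, so pushes to branches without an open MR are unaffected:

```yaml
no-op:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script: echo "no CI-relevant changes in this merge request"
```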

Is it possible to allow for a script in a CI/CD job to fail?

Entire jobs can be allowed to fail
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true
Is it possible to have, in a series of scripts, one that is allowed to fail (and others - not)?
job1:
  stage: test
  script:
    - execute_script_that_MAY_fail_and_should_be_marked_somehow_in_this_config_as_such
    - execute_script_that_MUST_NOT_fail
The rationale is that there may be scripts which are related, should be grouped together and sequential, and only some of them are allowed to fail.
An example could be a docker deployment with a build (must not fail), a stop of a container (which may fail if the container is not running) and a run (which must not fail).
My current workaround is to split this into separate jobs but this is an ugly hack:
stages:
  - one
  - two
  - three

one:
  stage: one
  script:
    - execute_script_that_MUST_NOT_fail

two:
  stage: two
  script:
    - execute_script_that_MAY_fail
  allow_failure: true

three:
  stage: three
  script:
    - execute_script_that_MUST_NOT_fail
A job fails if any of the script steps inside it returns a failed status. So you need to prevent that from happening, and the simplest way is to add || true to the step (or some logging, as @ahogen suggests in a comment):
job1:
  stage: test
  script:
    - execute_script_that_MAY_fail || true
    - execute_script_that_MUST_NOT_fail
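If you still want the job log to show whether the optional step failed, a variant of the same trick logs the exit code instead of silently discarding it (a sketch, assuming a POSIX-compatible shell runner; in `cmd || echo ...`, `$?` at that point holds cmd's exit status):

```yaml
job1:
  stage: test
  script:
    - execute_script_that_MAY_fail || echo "optional step failed (exit $?), continuing"
    - execute_script_that_MUST_NOT_fail
```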
