I am using needs in .gitlab-ci.yml to run a job, say job3, either when job1 and job2 both succeed, or when either job1 or job2 fails. Below is my code:
stages:
  - job1
  - job2
  - job3

job1:
  script:
    ----some code

job2:
  script:
    ----some code

job3:
  script:
    ----some code
  needs: [job1 & job2, !job1 | !job2]
Can someone guide me on this?
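One pattern that may fit (a sketch, not taken from the question): `needs:` only accepts a list of job names, not boolean expressions, but with ordinary stage ordering, `when: always` makes a job run once the earlier jobs have finished, whether they passed or failed — which covers both cases above.

```yaml
# Sketch, assuming the goal is "run job3 once job1 and job2 have finished,
# whatever their result". Stage names here are illustrative.
stages:
  - build
  - report

job1:
  stage: build
  script:
    - echo "job1"

job2:
  stage: build
  script:
    - echo "job2"

job3:
  stage: report
  when: always        # runs whether job1/job2 passed or failed
  script:
    - echo "job3"
```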
Is it possible to pass a variable value from job1 to the job2 tags in gitlab-ci.yml?
I tried the following:
job1:
  stage: build
  tags: [BUILD-POOL]
  script:
    - echo "BUILD_MACHINE=10.15.63.4" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'

job2:
  stage: test
  tags:
    - "$BUILD_MACHINE"
  script:
    - echo "job2 test"
  needs:
    - job1
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
I am trying this because in job1 I am using tags: [BUILD-POOL], which selects an available free VM.
So I want job2 to run on the same VM where job1 was built,
but with the code above, the IP is not assigned to the tags in job2.
Any suggestions/help will be appreciated.
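One detail worth checking (an assumption based on GitLab's documented behavior, not stated in the question): `tags` only expands CI/CD variables in GitLab 14.1 and later, so on older versions the literal string $BUILD_MACHINE is used as the tag and no runner matches. A sketch of job2, assuming GitLab 14.1+ and a runner registered with the tag 10.15.63.4:

```yaml
# Sketch; assumes GitLab 14.1+ (variables in `tags` are not expanded
# in earlier versions) and a runner registered with tag "10.15.63.4".
job2:
  stage: test
  tags:
    - "$BUILD_MACHINE"   # populated from job1's dotenv artifact
  needs:
    - job1               # pulls in job1's dotenv variables
  script:
    - echo "running on $BUILD_MACHINE"
```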
Consider the following .gitlab-ci.yml; the jobs run fine in the pipeline:
image:

stages:
  - job1
  - job2
  - job3

job1:
  dependencies: []
  stage: job1
  script:

job2:
  dependencies: []
  stage: job2
  script:

job3:
  dependencies: []
  stage: job3
  script:
Now, what I would like to achieve is:
to exclude job3 from the Pipeline
and to create a schedule only for this job3
I have read about the option to configure a schedule using rules, so I edited job3 as follows:
job3:
  dependencies: []
  stage: job3
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
And I set up a schedule. But unfortunately, it doesn't work out: job3 is still part of the pipeline, and my schedule runs all the jobs.
What is missing in this configuration? Or is it even possible to achieve this?
Define rules for every job:
job1:
  dependencies: []
  stage: job1
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'
      when: always
  script:

job2:
  dependencies: []
  stage: job2
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'
      when: always
  script:

job3:
  dependencies: []
  stage: job3
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
You'll have to add rules to every job to get the behavior you want.
On the jobs you want to only run on the scheduled pipelines,
rules: # e.g., for job3
  - if: '$CI_PIPELINE_SOURCE == "schedule"'
Because this rule will only match for scheduled pipelines, a job with these rules will only trigger on a schedule (when no rule matches, the job is excluded).
On every other job, add the inverse rule to exclude them from scheduled pipelines:
rules: # e.g., for job1 and job2
  - if: '$CI_PIPELINE_SOURCE != "schedule"'
This rule should match in every case except when the pipeline is triggered by a schedule, meaning these jobs will be included in all non-schedule pipelines.
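To avoid repeating the same rule on every job, the shared rule can also live in a hidden job that the other jobs extend (a sketch using the standard `extends` keyword; job names follow the example above):

```yaml
# Sketch: keep the shared "not on a schedule" rule in one hidden job.
.not-on-schedule:
  rules:
    - if: '$CI_PIPELINE_SOURCE != "schedule"'

job1:
  extends: .not-on-schedule
  stage: job1
  script:
    - echo "job1"

job2:
  extends: .not-on-schedule
  stage: job2
  script:
    - echo "job2"
```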
I can't find a satisfying solution for my case.
I want to start a job manually, but only when a certain previous job has failed. The job in question does a validation. I want to make the next job manual so that the user acknowledges that something wasn't good, investigates the problem, and continues only if they deem the failure ignorable.
stages:
  - test
  - validate
  - build

lint:
  stage: test
  allow_failure: true
  script:
    - npm run lint

check:reducer:
  stage: test
  allow_failure: true
  script:
    - chmod +x ./check-reducers.py
    - ./check-reducers.py $CI_PROJECT_ID $CI_COMMIT_BRANCH
  except:
    - master
    - development

fail:pause:
  stage: validate
  allow_failure: true
  script:
    - echo The 'validate:reducer' job has failed
    - echo Check the job and decide if this should continue
  when: manual
  needs: ["check:reducer"]

build:
  stage: build
  script:
    - cp --recursive _meta/ $BUILD_PATH
    - npm run build
  artifacts:
    name: "build"
    expire_in: 1 week
    paths:
      - $BUILD_PATH
  needs: ["fail:pause"]
I would like that if check:reducer fails, fail:pause waits for user input. If check:reducer exits with 0, fail:pause should start automatically, or build should start.
Unfortunately, this isn't possible, as the when keyword is evaluated at the very start of the pipeline (i.e., before any job has run), so you cannot set the when condition based on a previous job's status.
This is possible if you use a generated gitlab-ci.yml as a child workflow.
stages:
  - test
  - approve
  - deploy

generate-config:
  stage: test
  script:
    - ./bin/run-tests.sh
    - ./bin/generate-workflows.sh $?
  artifacts:
    paths:
      - deploy-gitlab-ci.yml

trigger-workflows:
  stage: deploy
  trigger:
    include:
      - artifact: deploy-gitlab-ci.yml
        job: generate-config
The generate-workflows.sh script writes out a deploy-gitlab-ci.yml that either has the approval job or not, based on the return code of run-tests.sh passed as the first argument to the script.
You can make it easier on yourself using includes: either include the approve step or not in the generated deploy-gitlab-ci.yml file, and make the steps in the deploy optionally need the approval.
approve-gitlab-ci.yml:

approve:
  stage: approve
  when: manual
  script:
    - echo "Approved!"

deploy-gitlab-ci.yml:

deploy:
  stage: deploy
  needs:
    - job: approve
      optional: true

Then the generated file is simply an include with the jobs to run:

include:
  - approve-gitlab-ci.yml
  - deploy-gitlab-ci.yml
I need to trigger only a specific stage instead of the whole downstream pipeline. This is how it's currently configured:
stage: bridge
trigger:
  project: user/project2
  branch: master
  strategy: depend
You can set variables in the parent pipeline, check them in the child pipeline, and control job creation via rules. In the example below, only job1 runs, because only the variable $FOO is set in the parent pipeline.
Your parent pipeline:
test:
  stage: test
  variables:
    FOO: bar
  trigger:
    project: user/project2
    branch: master
    strategy: depend
Your child pipeline can look like this:
stages:
  - job1
  - job2

job1:
  stage: job1
  script:
    - echo "job1"
  rules:
    - if: '$FOO'

job2:
  stage: job2
  script:
    - echo "job2"
  rules:
    - if: '$BAR'
I was setting up a GitLab environment. After each push I am running 5 test cases, but if any of the test cases fails, the other test cases are skipped.
I want to run all the cases, because they are independent of each other.
gitlab-ci.yml
stages:
  - build
  - unit_test_1
  - unit_test_2
  - unit_test_3

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: unit_test_1
  script:
    - bash ./unit_test_1.sh

job3:
  stage: unit_test_2
  script:
    - bash ./unit_test_2.sh

job4:
  stage: unit_test_3
  script:
    - bash ./unit_test_3.sh
If unit_test_1.sh fails, the other tests are skipped.
You can use the when property to make your jobs run every time, regardless of the status of jobs from prior stages of the build.
stages:
  - build
  - test

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: test
  when: always
  script:
    - bash ./unit_test_1.sh

job3:
  stage: test
  when: always
  script:
    - bash ./unit_test_2.sh

job4:
  stage: test
  when: always
  script:
    - bash ./unit_test_3.sh
Also, if you want to make sure you never have jobs running in parallel, you can configure your runners with concurrency limits.
Configuring the limit globally restricts all your runners to running only 1 job concurrently between them.
Configuring it per runner limits that runner to running only 1 job concurrently per build token.
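For reference, both limits live in the runner's config.toml (a sketch; `concurrent` and `limit` are the documented keys, while the name, URL, and token values are placeholders):

```toml
# /etc/gitlab-runner/config.toml (sketch; placeholder values)
concurrent = 1                   # global cap: at most 1 job across all runners

[[runners]]
  name = "example-runner"        # placeholder
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  limit = 1                      # per-runner cap: at most 1 concurrent job
  executor = "shell"
```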
You can try it like this:
gitlab-ci.yml
stages:
  - build
  - test

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: test
  script:
    - bash ./unit_test_1.sh

job3:
  stage: test
  script:
    - bash ./unit_test_2.sh

job4:
  stage: test
  script:
    - bash ./unit_test_3.sh
The documentation says:
The ordering of elements in stages defines the ordering of builds' execution:
Builds of the same stage are run in parallel.
Builds of the next stage are run after the jobs from the previous stage complete successfully.
https://docs.gitlab.com/ce/ci/yaml/README.html#stages
To run jobs in parallel, you have to give them the same stage name.
https://docs.gitlab.com/ce/ci/pipelines.html#pipelines