I set up jobs to run only when pushing/merging to the branch "dev", but I also want to be able to run them when I trigger the pipeline manually. Something like this:
test:
  stage: test
  <this step should be run always>

build:
  stage: build
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
    - if: <also run if the pipeline was run manually, but skip if it was triggered by something else>
This job is defined in a child "trigger" pipeline. This is what the parent looks like:
include:
  - template: 'Workflows/MergeRequest-Pipelines.gitlab-ci.yml'

stages:
  - triggers

microservice_a:
  stage: triggers
  trigger:
    include: microservice_a/.gitlab-ci.microservice_a.yml
    strategy: depend
  rules:
    - changes:
        - microservice_a/*
The effect I want to achieve is:
Run test in all cases
Run build in the child pipeline only when pushing/merging to "dev"
Also run the build job when the pipeline is run manually
Do not run the build job in any other case (like an MR)
The rules examples in the documentation showcase:
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
The when: manual should be enough in your case: it ensures that a job does not run unless a user starts it.
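A minimal sketch of how that could look for the build job from the question, assuming a "manual run" means pressing Run pipeline in the UI (pipeline source web); the script line is a placeholder:

build:
  stage: build
  rules:
    # run automatically on pushes/merges to dev
    - if: $CI_COMMIT_REF_NAME == "dev"
    # also offer the job when the pipeline was started manually from the UI
    - if: $CI_PIPELINE_SOURCE == "web"
      when: manual
      allow_failure: true  # so a pipeline where this is not started is not blocked
  script:
    - ./build.sh  # hypothetical build step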
Bonus question: This job is defined in a child "trigger" pipeline
Then it is related to gitlab-org/gitlab issue 201938, which was supposed to be fixed with GitLab 13.5 (Oct. 2020), but that only allows manual actions for parent-child pipelines (illustrated by this thread).
Double-check the environment variables as set in your child job:
echo $CI_JOB_MANUAL
If true, that would indicate a job that is part of a manually triggered pipeline.
While issue 22448 ("$CI_JOB_MANUAL should be set in all dependent jobs") points to this option not working, it includes a workaround.
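For that check, a minimal debug job you could drop into the child pipeline (the job name is illustrative):

debug_env:
  stage: test
  script:
    # print the variables that reveal how this pipeline was started
    - echo "CI_JOB_MANUAL=$CI_JOB_MANUAL"
    - echo "CI_PIPELINE_SOURCE=$CI_PIPELINE_SOURCE"
    - echo "CI_COMMIT_REF_NAME=$CI_COMMIT_REF_NAME"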
I'm new to these GitLab CI/CD features, and I've encountered the following issue.
I have these two jobs in my gitlab-ci.yml: the automation test and my deployment job.
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  only:
    - schedules

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  only:
    - staging
I want to run my automation test automatically on a daily basis, so I have created a new pipeline schedule against my staging branch.
However, when the schedule is triggered, it also runs my deployment job, which is not needed because I only want my automation test to run in the scheduled pipeline.
Does this happen because my deploy_to_staging job has the only: - staging rule? If so, how can I set my scheduled pipeline to only run the automation test without triggering the other job?
If you wanted to do this with only/except, it would probably be sufficient to add
except:
  - schedules
to your deployment job.
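That is, the deployment job from the question would become something like:

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  only:
    - staging
  except:
    - schedules  # skip this job in scheduled pipelines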
Notably, though, the rules-based system is preferred at this point.
This also allows for more expressive and detailed decisions when it comes to running jobs.
The simplest way to set the rules for the two jobs would be:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - if: $CI_COMMIT_REF_SLUG == "staging"
And that might be all you need.
Though when it comes to rules, a particularly convenient way of handling them is to define some common rules for the configuration and reuse them through YAML anchors. The following are some reusable definitions for your case:
.definitions:
  rules:
    - &if-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
    - &not-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - &if-staging
      if: $CI_COMMIT_REF_SLUG == "staging"
And after that you could use them in your jobs like this:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - *if-scheduled

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - *not-scheduled
    - *if-staging
This way of handling the rules makes it a bit easier to keep an overview and to reuse rules, which definitely makes sense in large configurations.
You should use rules instead of only, as the latter is no longer in active development.
With that in mind, you can change to the following rules clauses using the predefined variables CI_COMMIT_REF_SLUG and CI_PIPELINE_SOURCE. The automation_test_scheduled job only runs on the staging branch when triggered by a schedule, and the deploy_to_staging job runs on any change to the staging branch.
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging" && $CI_PIPELINE_SOURCE == "schedule"'

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging"'
My cleanup job is defined as:
cleanup:
  stage: e2e
  needs:
    - job: deploy
  script:
    - make clean
  resource_group: development
  when: always
The main goal is to run the cleanup job whether the deploy job succeeds or fails, which is why I use when: always and added needs: - job: deploy.
But the problem is that if any earlier job fails, the cleanup job still runs, even if the deploy job never ran.
I think you need to remove the when: always definition from your job's config.
Instead, you should mark the needs entry as optional, as shown below; that way the pipeline will still run:
cleanup:
  stage: e2e
  needs:
    - job: deploy
      optional: true
  script:
    - make clean
  resource_group: development
Another thing you could try would be to change your rules to run this job on_success or on_failure:
cleanup:
  stage: e2e
  needs:
    - job: deploy
  script:
    - make clean
  resource_group: development
  rules:
    # Dummy if rule to showcase multiple rules
    - if: $CI_COMMIT_TAG
      when: manual
    - when: on_success
    - when: on_failure
With that second approach the job would run whether the previous job succeeds or not, but it would be limited to running after the deploy job.
Did not test this but I think this might work.
I need a task that I can either execute manually or that runs automatically on a nightly schedule. I found this solution:
rules:
  - changes:
      - scheduled
    when: always
  - when: manual
The problem with this solution is that when a new pipeline is created and the task isn't run, the pipeline is stuck in a blocked state until I run it manually.
To avoid this I found a suggested workaround to add:
allow_failure: true
But this again brings a problem: if the task fails on the nightly run, the pipeline doesn't fail and I don't get e-mail notifications.
Is there a way to solve this?
You can set allow_failure conditionally using rules:. So, instead of setting the allow_failure: key on the job, set it in any rule that causes the job to be 'manual'.
rules:
  - changes:
      - scheduled
    when: always
  - when: manual
    allow_failure: true
Also, based on your description it would probably be best to use a rule like so:
myjob:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
      allow_failure: false
    - when: manual
      allow_failure: true
Another alternative to prevent your pipeline being blocked by this job would be to have it run in the .post stage and use needs: [] to have it run immediately. That way, it'll never cause other jobs to wait on it.
myjob:
  needs: []
  stage: .post
  # ...
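Putting both ideas together, a sketch of how the complete job could look (the job name and script are illustrative):

nightly_task:
  stage: .post
  needs: []  # start immediately; no other job waits on it
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
      allow_failure: false  # a nightly failure fails the pipeline, so notifications fire
    - when: manual
      allow_failure: true  # an unstarted manual run does not block the pipeline
  script:
    - ./run-nightly-task.sh  # hypothetical task script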
There might be a better solution, but another workaround could be to have two separate jobs holding the rules, and template out everything else. For example:
.build_template: &build_template
  image: ubuntu:18.04
  script:
    - echo "hello world"

build_manual:
  <<: *build_template
  when: manual
  except:
    - schedules

build_nightly:
  <<: *build_template
  only:
    - schedules
I have a gitlab-ci.yml file like the following:
stages:
  - test
  - job1
  - job2

test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    ...

myjob1:
  stage: job1
  script:
    ...

myjob2:
  stage: job2
  script:
    ...
According to the documentation HERE (or at least how I understood it), the first stage/job is only run when I create a merge request.
This is true, but the next stage (job1) runs in parallel as soon as the first job (test) has started. As far as I understand, the stages (which are defined in the order test -> job1 -> job2) always run in sequence.
So what am I doing wrong? Why do the jobs test and job1 run in parallel, and not in sequence as expected?
After a lot of trial and error, and reading and rereading parts of the really unclear and confusing documentation, I might have found a solution.
First, for the job you only want to run on a merge request (or which you do not want to run if the pipeline is triggered by a schedule or started manually), you need to change that job as follows:
test:
  rules:
    - if: $CI_PIPELINE_SOURCE == "web" || $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
  stage: test
  script:
    - 'echo "Running Test"'
    - 'echo $CI_PIPELINE_SOURCE'
Here, you define a rule that checks whether the variable CI_PIPELINE_SOURCE is either web or schedule. The value web means a manual pipeline trigger (i.e. you pressed Run pipeline manually, which is not explained in the documentation), and schedule means the pipeline was triggered by a schedule (not tested, but I assume that is what schedule means).
So if the pipeline is triggered by a scheduled event or manually, never tells GitLab not to execute that job. The when: on_success is like an else statement, which tells GitLab to run the job in any other case.
However, that is not the complete story. When you use git to make changes to the code and push them to GitLab via git push, you get two triggers in GitLab: a merge_request_event trigger and a push trigger. That means the pipeline is started twice!
To avoid the pipeline being started twice, you need to use the workflow keyword, which defines whether the pipeline (= workflow) runs or not. (The term workflow seems to mean pipeline.) Here is the code to put into the gitlab-ci.yml file:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always
This construct prevents the pipeline from running when the trigger is merge_request_event, so the additional pipeline is not started. In all other cases (e.g. when the trigger is push), the pipeline runs.
So here is the complete gitlab-ci.yml code:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always

stages:
  - test
  - stage1
  - stage2

test:
  rules:
    - if: $CI_PIPELINE_SOURCE == "web" || $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
  stage: test
  script:
    - 'echo "Running Test"'

my_stage1:
  stage: stage1
  script:
    - 'echo "Running stage 1"'

my_stage2:
  stage: stage2
  script:
    - 'echo "Running stage 2"'
If you do a git push, one pipeline runs with the jobs test, my_stage1 and my_stage2; when you start the pipeline manually, or when it is triggered by a schedule, one pipeline is started with only my_stage1 and my_stage2.
As to why this is so complicated and confusing, I have not the slightest idea.
I have a project A and an E2E project. When project A deploys, I want it to trigger the E2E pipeline to run the tests, but I only want the test stage to run in the triggered pipeline; we don't need the triggered E2E pipeline to build, deploy, etc.
e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend
    stage: test
I tried to use stage in the trigger config, but got the error unknown keys: stage.
Any suggestions?
In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using the rules syntax:
build-from-trigger:
  stage: build
  when: never
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
      when: always
  script:
    - ./build.sh # this is just an example, you'd run whatever you normally would here
The first when statement, when: never, sets the default for the job: by default, this job will never run. Then, using the rules syntax, we set a condition that will allow the job to run. If the CI_COMMIT_REF_NAME variable (the branch or tag name) is master AND the CI_PIPELINE_SOURCE variable (whatever kicked off this pipeline) is trigger, then we run this job.
You can read about the when keyword here: https://docs.gitlab.com/ee/ci/yaml/#when, and you can read the rules documentation here: https://docs.gitlab.com/ee/ci/yaml/#rules
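Conversely, for jobs in the E2E project that should not run in triggered pipelines (build, deploy, etc.), you could invert the same pipeline-source check used above. A sketch with an illustrative job name and script:

deploy:
  stage: deploy
  rules:
    # skip this job when the pipeline was started by an upstream trigger
    - if: $CI_PIPELINE_SOURCE == 'trigger'
      when: never
    - when: on_success
  script:
    - ./deploy.sh  # hypothetical deploy step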