GitLab scheduled pipeline also runs another job not on schedule

I'm new to these GitLab CI/CD features, and I encountered the following issue.
I have these 2 jobs in my gitlab-ci.yml, the automation test and my deployment job:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  only:
    - schedules

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  only:
    - staging
I want to run my automation test automatically on a daily basis and I have created a new pipeline schedule against my staging branch.
However, when the scheduler is triggered, it also runs my deployment job, which is not needed because I only want my automation test to run in the scheduled pipeline.
Does this happen because my deploy_to_staging job has the only: - staging rule? If so, how can I set my scheduled pipeline to only run the automation test without triggering another job?

If you wanted to do this with only/except, it would probably be sufficient to add
except:
  - schedules
to your deployment job.
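Applied to the deployment job from the question, that would look like:

```yaml
deploy_to_staging:
  stage: deploy-staging
  environment: staging
  only:
    - staging
  except:
    - schedules  # skip this job in scheduled pipelines
```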
Notably, though, the rules-based system is preferred at this point.
This also allows for more expressive and detailed decisions when it comes to running jobs.
The simplest way to set the rules for the two jobs would be:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - if: $CI_COMMIT_REF_SLUG == "staging"
And that might be all you need.
Though when it comes to rules, a particularly convenient way of handling them is to define some common rules for the configuration and reuse them through YAML anchors. The following are some reusable definitions for your case:
.definitions:
  rules:
    - &if-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
    - &not-scheduled
      if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - &if-staging
      if: $CI_COMMIT_REF_SLUG == "staging"
And after that you could use them in your jobs like this:
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - *if-scheduled

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - *not-scheduled
    - *if-staging
This way of handling the rules makes it a bit easier to keep an overview and to reuse rules, which definitely makes sense in large configurations.

You should use rules instead of only, as the latter is no longer in active development.
With that in mind, you can change to the following rules clauses using the predefined variables CI_COMMIT_REF_SLUG and CI_PIPELINE_SOURCE. The automation_test_scheduled job only runs on the staging branch when triggered by a schedule, and the deploy_to_staging job runs on any change to the staging branch.
automation_test_scheduled:
  stage: test
  script:
    - yarn test:cypress
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging" && $CI_PIPELINE_SOURCE == "schedule"'

deploy_to_staging:
  stage: deploy-staging
  environment: staging
  rules:
    - if: '$CI_COMMIT_REF_SLUG == "staging"'

Related

Splitting stages into multiple pipelines

Let's assume I have a few stages:
stages:
  - mr:stage1
  - mr:stage2
  - mr:stage3
On all jobs I have this rule:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
So I am getting pipeline like that
How can I split my 3 stages into 2 pipelines? For example I would like to have mr:stage1 and mr:stage2 in one pipeline and if this pipeline is successful, mr:stage3 will invoke in separate pipeline.
Thx for help
Each project or repository in GitLab has a single .gitlab-ci.yml file, and therefore a single pipeline. There is no way to have multiple pipelines like this.
You can include other yml files in your base .gitlab-ci.yml file, but this is solely for your convenience as the pipeline author. At runtime, the included files are copied and pasted into a single yml file, so there is still only ever a single pipeline.
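As a minimal sketch (the file paths here are hypothetical), such an include section might look like this; at runtime both files are merged into the one pipeline definition:

```yaml
include:
  - local: ci/build-jobs.gitlab-ci.yml   # hypothetical path
  - local: ci/deploy-jobs.gitlab-ci.yml  # hypothetical path
```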
However, that single pipeline can look drastically different depending on your use cases.
Let's say you have 3 jobs that only run on push events to a non-default branch. When you push to the default branch, these 3 jobs will not run. Let's say you have another 2 jobs that only run when there's a push event to the default branch. When you push here, these jobs will run, but if you push to a feature branch, they will not.
This scenario might look like this:
stages:
  - build
  - test
  - deploy

Pull in Backend Dependencies nonprod:
  stage: build
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH
      when: always
    - when: never
  script:
    - # run your dependency manager to pull in backend dependencies, ie Composer for PHP, including dependencies only used in lower environments, like Unit Testing libraries, etc.
    # ...

Pull in Frontend Dependencies nonprod:
  stage: build
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH
      when: always
    - when: never
  script:
    - # Run npm install, etc., including those only needed in lower environments
    # ...

Pull in Backend Dependencies Prod:
  stage: build
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push' && ($CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH || $CI_COMMIT_TAG)
      when: always
    - when: never
  script:
    - # run your dependency manager to pull in backend dependencies, ie Composer for PHP, without dev dependencies
    # ...

Run Unit Tests:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH
      when: always
    - when: never
  script:
    - # run our unit tests

Deploy to production:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      when: always
    - when: never
  script:
    - # deploy steps to prod
etc.
For the first two jobs, the rules say "If the event is a Push AND the branch isn't the default branch, always run this job. Otherwise, if any of these conditions isn't true, never run this job".
The Prod job's rules say "If the event is a Push AND the branch IS the default branch, OR it's a tag, run this job. Otherwise never run it."
Then we only run our unit test job for feature branches, and we only deploy to production for the default branch.
Therefore, depending on which branch/tag is pushed to, the pipeline instance will look very different. For a feature branch we'll have 2 build jobs, and a test job (2 stages). For the default branch, we'll have a single build job and a deploy job.
The same is true if you need to handle sources other than 'push'. For example, if you have a job that only runs when triggered via the API (or from another pipeline instance or another project's pipeline), you'd look to see if the $CI_PIPELINE_SOURCE variable holds the string trigger.
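As a sketch of that last case (the job name is hypothetical), a job limited to API/trigger sources could look like:

```yaml
api_triggered_job:
  stage: deploy
  rules:
    # "trigger" = pipeline started with a trigger token via the API;
    # "pipeline" = multi-project pipeline started by another project's pipeline
    - if: $CI_PIPELINE_SOURCE == "trigger" || $CI_PIPELINE_SOURCE == "pipeline"
  script:
    - echo "Started via a trigger"
```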

How to run Job when Pipeline was triggered manually

I set up jobs to run only when pushing/merging to the branch "dev", but I also want to be able to run them if I trigger that pipeline manually. Something like this:
test:
  stage: test
  <this step should be run always>

build:
  stage: build
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
    - if: <also run if the pipeline was run manually, but skip if it was triggered by something else>
This job is defined in a child "trigger" pipeline. This is what the parent looks like:
include:
  - template: 'Workflows/MergeRequest-Pipelines.gitlab-ci.yml'

stages:
  - triggers

microservice_a:
  stage: triggers
  trigger:
    include: microservice_a/.gitlab-ci.microservice_a.yml
    strategy: depend
  rules:
    - changes:
        - microservice_a/*
The effect I want to achieve is:
Run test in all cases
Run build in the child pipeline only when pushing/merging to "dev"
Also run the build job when the pipeline is run manually
Do not run the build job in any other case (like an MR)
The rules documentation examples showcase:
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual
      allow_failure: true
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
The when: manual should be enough in your case: it ensures a job doesn't run unless a user starts it.
Bonus question: This job is defined in a child "trigger" pipeline
Then it is related to gitlab-org/gitlab issue 201938, which was supposed to be fixed with GitLab 13.5 (Oct. 2020), but that only allows manual actions for parent-child pipelines (illustrated by this thread).
Double-check the environment variables as set in your child job
echo $CI_JOB_MANUAL
If true, that would indicate the job is part of a manually triggered pipeline.
While issue 22448 ("$CI_JOB_MANUAL should be set in all dependent jobs") points to this option not working, it includes a workaround.
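A minimal sketch of such a check inside a child-pipeline job (the job name is made up for illustration):

```yaml
debug_manual_flag:
  stage: test
  script:
    # CI_JOB_MANUAL is set to "true" for jobs a user started manually
    - echo "CI_JOB_MANUAL=$CI_JOB_MANUAL"
    - echo "CI_PIPELINE_SOURCE=$CI_PIPELINE_SOURCE"
```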

Gitlab - Separating CI from Deployment

We are currently using Jenkins and planning to migrate to GitLab. We actually have 2 Jenkinsfiles in each repo: one is set up as a Multibranch Pipeline and runs on all changes. It is the merge check that runs all the various linting, tests, building of the Docker containers, etc. The second Jenkinsfile is only run manually from Jenkins; it takes in all the various input parameters and deploys the code, which mostly comes from, say, the linted Ansible/Terraform, and selects a Docker image that would have already been built via the CI side of things.
I know GitLab doesn't support this model, but this project is already MVP'd, so reworking how the devs combined their logic and deployment code is probably not going to happen.
Is it possible, in one gitlab-ci.yml file, to say: run these jobs on merges/pushes, and only run this one on manual deployment?
e.g.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

stages:
  - test
  - deploy
  - destroy

test-python-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda

deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - terraform destroy
We have not even deployed GitLab yet, so I'm writing that off the top of my head, but I want to know what level of pain I am in for.
There are multiple options to achieve your goal with minimal configuration effort:
Working with hidden jobs and using inheritance or references for easier configuration - doable in one file
Extracting parts into child pipelines for easier usage
Reduced configuration in one file
I assume what you hate most is that you have to redefine the rules for your jobs. There are two ways to reduce that duplication.
Inheritance
Can reduce a lot of duplication, but can also cause problems with unintended side effects.
.test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  extends: .test
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  extends: .test
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  extends: .test
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
Composition
By using !reference you can combine certain aspects of jobs; see https://docs.gitlab.com/ee/ci/yaml/#reference-tags
.test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
Parent-child pipelines
Sometimes it might also be suitable to extract functionality into child pipelines. You can more easily control what happens at each stage when you call the child pipeline, and you gain overview due to fewer lines of code. It adds complexity to your builds, but generally it will produce a cleaner and easier-to-maintain CI structure (my opinion).
This approach will only add the child pipeline when needed - furthermore, you could also centralize this file if it is similar across deployments.
.gitlab-ci.yml
deploy:
  stage: deploy
  trigger:
    include:
      - local: Deploy.gitlab-ci.yml
    strategy: depend
  rules:
    - if: $CI_PIPELINE_SOURCE == 'web' # maybe also useful, as it will only happen on a web interaction
      when: manual
    - if: $CI_PIPELINE_SOURCE == 'schedule' # maybe also useful, for schedules
Deploy.gitlab-ci.yml
deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - terraform destroy
Sidenotes
This is not 100% answering your question, but it shows you a lot of flexibility, and you will soon realize that mimicking Jenkins is not 100% ideal. E.g. having deployment jobs directly attached to a commit, and visible on it, allows for a better overview of what exactly was deployed. If you need to run such things manually, I highly recommend using schedules with preconfigured values, as they only have a play button. Also, you already have the artifacts in place, built by your pipeline - why not add additional steps utilizing them, instead of providing this information manually?
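As a sketch of that schedule-with-preconfigured-values approach (the RUN_DEPLOY variable name is an assumption; you would set it in the schedule's settings and leave the schedule deactivated, so it only runs via its play button):

```yaml
scheduled_deploy:
  stage: deploy
  rules:
    # only runs in scheduled pipelines whose schedule sets RUN_DEPLOY=true
    - if: $CI_PIPELINE_SOURCE == "schedule" && $RUN_DEPLOY == "true"
  script:
    - terraform deploy  # placeholder deploy step from the question
```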
I hope my insights will be useful to you, and happy migration ;)

How to make gitlab run jobs in sequential order?

I have a gitlab-ci.yml file like the following:
stages:
  - test
  - job1
  - job2

test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    ...

myjob1:
  stage: job1
  script:
    ...

myjob2:
  stage: job2
  script:
    ...
According to the documentation HERE (or at least how I understood it), the first stage/job is only run when I create a merge request.
This is true, but the next stage (job1) runs in parallel once the first job (test) has started. As far as I understand, the stages (defined in the order test -> job1 -> job2) should always run in sequence.
So what am I doing wrong? Why do the test job and job1 run in parallel, and not in sequence as expected?
After a lot of trial and error, and reading and rereading parts of the really unclear and confusing documentation, I might have found a solution.
First, for the job you only want to run on a merge request (or which you do not want to run when a schedule triggers it or you start the pipeline manually), you need to change the job as follows:
test:
  rules:
    - if: $CI_PIPELINE_SOURCE == "web" || $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
  stage: test
  script:
    - 'echo "Running Test"'
    - 'echo $CI_PIPELINE_SOURCE'
Here, you define a rule that checks if the variable CI_PIPELINE_SOURCE is either web or schedule. The value web means a manual pipeline trigger (i.e. you pressed Run pipeline manually, which is not explained in the documentation), and schedule means the pipeline was triggered by a schedule (not tested, but I assume that is what schedule means).
So if the pipeline is triggered by a scheduled event or manually, when: never tells GitLab not to execute that job. The when: on_success is like an else statement, which tells GitLab to run the job in any other case.
However, that is not the complete story. When you use git to make changes to the code and push them to GitLab via git push, you get two triggers in GitLab: a merge_request_event trigger and a push trigger. That means the pipeline is started twice!
To avoid the pipeline being started twice, you need to use the workflow key, which helps define whether the pipeline (= workflow) runs or not. (The term workflow seems to mean pipeline.) Here is the code to put into the gitlab-ci.yml file:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always
This construct suppresses the pipeline when the trigger is merge_request_event; in that case, the additional pipeline is not run. In all other cases (when the trigger is e.g. push), the pipeline runs.
So here is the complete gitlab-ci.yaml code:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always

stages:
  - test
  - stage1
  - stage2

test:
  rules:
    - if: $CI_PIPELINE_SOURCE == "web" || $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
  stage: test
  script:
    - 'echo "Running Test"'

my_stage1:
  stage: stage1
  script:
    - 'echo "Running stage 1"'

my_stage2:
  stage: stage2
  script:
    - 'echo "Running stage 2"'
If you make a git push then one pipeline is run with the stages test, my_stage1 and my_stage2, and when you start the pipeline manually or if it is triggered by a schedule, one pipeline is started with the stages my_stage1 and my_stage2.
As to why this is so complicated and confusing, I have not the slightest idea.

What is the purpose of "workflow:rules" in Gitlab-ci pipelines?

I am a bit confused about the difference between GitLab CI pipeline workflow:rules and job-level rules:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE != "schedule"'
and
test:
  stage: test
  image: image
  script:
    - echo "Hello world!"
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
What happens if both of them are used in the same gitlab-ci YAML file?
With workflow you configure when a pipeline is created, while with job-level rules you configure when a job is created.
So in your example pipelines are created for pushes but cannot be scheduled while your test job will only run when scheduled.
But as workflow rules take precedence over job rules, no pipeline will be created in your example, because your workflow rules and job rules are mutually exclusive.
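For the test job to ever run, the workflow rules and the job rules have to overlap. A minimal sketch where they do:

```yaml
workflow:
  rules:
    # create pipelines for pushes and for schedules
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

test:
  stage: test
  script:
    - echo "Hello world!"
  rules:
    # the job itself is only created in scheduled pipelines
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```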
