Given the following CI pipeline running on GitLab:
stages:
  - build
  - website

default:
  retry: 1
  timeout: 15 minutes

build:website:
  stage: build
  ...

website:dev:
  stage: website
  ...
What exactly does the first colon in the job names build:website: and website:dev: mean?
Is it that the part after the stage name is passed as a variable to the stage?
Naming of jobs does not change the behavior of the pipeline in this case. It's just the job name.
However, if you use the same prefix before the : for multiple jobs, those jobs will be grouped in the UI. It still doesn't affect the actual function of the pipeline, but it changes how the jobs show up in the pipeline graph.
It's a purely cosmetic feature.
Jobs can also be grouped using / or a space as the separator.
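For example, a minimal sketch (with made-up job names, following the grouping behavior described above): both jobs share the assets prefix, so the pipeline graph shows them as a single assets group, while they still run as two independent jobs:

assets:css:
  stage: build
  script:
    - echo "building CSS"

assets:js:
  stage: build
  script:
    - echo "building JS"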
I have a GitLab CI pipeline with a 'migration' job which I want to be added only if certain files changed between the current commit and the master branch. In my current project I'm forced to use GitLab CI pipelines for the push event, which complicates things.
The docs on rules:changes clearly state that it will misbehave and not work properly without an MR (my case of a push event), so that's out of the question.
The docs on rules:if state that it only works with environment variables. But the docs on passing CI/CD variables to another job clearly state that
These variables cannot be used as CI/CD variables to configure a
pipeline, but they can be used in job scripts.
So now I'm stuck. I could just skip running the job in question by overriding the script and checking for file changes, but what I want is to not add the job to the pipeline in the first place.
While you can't add a single job to a pipeline based on the output of a script, you can add a child pipeline dynamically based on the script output. The method of using rules: with dynamic variables won't work because rules: are evaluated at the time the pipeline is created, as you found in the docs.
However, you can achieve the same effect using the dynamic child pipelines feature. The idea is that you dynamically create the YAML for the desired pipeline in a job. The YAML created by your job is then used to create a child pipeline, which your pipeline can depend on.
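A minimal sketch of the mechanics (the job names and generate-pipeline.sh are hypothetical; trigger:include:artifact is the documented way to start a dynamic child pipeline):

generate-pipeline:
  stage: build
  script:
    # Write the child pipeline YAML, emitting the migration job
    # only when the relevant files have changed.
    - ./generate-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
    strategy: depend  # make the parent pipeline wait for the child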
Sadly, adding or removing a GitLab job based on variables created by a previous job is not possible within a given pipeline.
A way to achieve this is to break your current pipeline into an upstream and a downstream pipeline.
The upstream will have 2 jobs:
The first one will use your script to define a variable.
The second one will trigger the downstream pipeline, passing this variable.
Upstream
check_val:
  ...
  script:
    # your script imposes the logic with the needed checks
    # if true:
    - echo "MY_CONDITIONAL_VAR=true" >> var.env
    # if false:
    - echo "MY_CONDITIONAL_VAR=false" >> var.env
  artifacts:
    reports:
      dotenv: var.env

trigger_your_original_pipeline:
  ...
  variables:
    MY_CONDITIONAL_VAR: "$MY_CONDITIONAL_VAR"
  trigger:
    project: "project_namespace/project"
The downstream would be your original pipeline.
Downstream
...
migration:
  ...
  rules:
    - if: '$MY_CONDITIONAL_VAR == "true"'
Now MY_CONDITIONAL_VAR will be available at the start of the downstream pipeline, so you can use rules to decide whether or not to add the migration job.
I want to run tests in GitLab CI using pytest-testmon. Unfortunately, it seems the generated .testmondata file is not saved on the gitlab-runner instances, so no record of what was tested is kept.
Is there a way of saving this .testmondata file and using it (for every branch under test) in order to avoid repeating tests of unchanged code when code is pushed to those branches?
You can upload files generated during your job to GitLab so you can view them later by downloading them, viewing them on a Merge Request, or using them in other jobs, via the artifacts keyword. Here's a simple example:
tests:
  stage: tests
  script:
    - ./run_tests.sh
  artifacts:
    paths:
      - .testmondata
    name: "${CI_COMMIT_REF_NAME}_tests_testmondata"
    expose_as: 'Tests Results'
    expire_in: 3 months
In this example, after your script section (and before_script or after_script, if present) runs, gitlab-runner will look for files or directories matching the entries in the paths array. If none are found, it logs a warning.
The runner will then upload the artifacts to GitLab using the name in the name field. It's good practice to use some of the predefined variables GitLab CI provides so you can easily identify which pipeline and job an artifact came from, if necessary. In this example the name is the branch or tag name followed by _tests_testmondata (note the ${} braces, which stop the variable name from swallowing the suffix).
The next attribute, expose_as, lets you put a link on a Merge Request for this branch to display the artifacts. Without expose_as, you'd have to open the pipeline and job to view the artifacts.
Next, expire_in lets you define when the artifacts from this job should expire. If you don't set this attribute, the artifacts expire based on the GitLab server's setting, which is 30 days by default. You can supply a number of seconds as an integer (3600, i.e. one hour), never (to never expire), or many human-readable formats like 3 years 8 months 28 days or 46 months.
You can view all the available predefined variables here: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html, and you can view all the options for the artifacts keyword here: https://docs.gitlab.com/ee/ci/yaml/#artifacts
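Note that artifacts from one pipeline are not automatically restored into the next pipeline on the same branch. If the goal is to reuse .testmondata across pushes, the cache keyword with a per-branch key is the usual mechanism; a sketch of that variant (same hypothetical run_tests.sh as above):

tests:
  stage: tests
  cache:
    key: "$CI_COMMIT_REF_SLUG"  # one cache per branch
    paths:
      - .testmondata
  script:
    - ./run_tests.sh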
When you work on your .gitlab-ci.yml for a big project, a time-consuming test stage, for example, causes a lot of delay. Is there an easy way to disable that stage? Just removing it from the stages definition makes the YAML invalid from GitLab's point of view (since jobs still reference the now-missing stage), and in my case results in:
test job: chosen stage does not exist; available stages are .pre, build, deploy, .post
Since YAML does not support block comments, you'd need to comment out every line of the offending stage.
Are there quicker ways?
You could disable all the jobs in your stage by starting each job name with a dot (.). See https://docs.gitlab.com/ee/ci/jobs/index.html#hide-jobs for more details.
.hidden_job:
  script:
    - run test
There is a way to disable individual jobs (but not stages) like this:
test:
  stage: test
  when: manual
The jobs are skipped by default, but can still be triggered in the UI.
This is also possible with rules and when, as below:
test:
  stage: test
  rules:
    - when: never
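The same rules mechanism can also read a variable, so the job can be switched off without editing the file; a sketch, where DISABLE_TESTS is a hypothetical CI/CD variable you would set in the UI or a schedule:

test:
  stage: test
  rules:
    - if: '$DISABLE_TESTS == "true"'
      when: never
    - when: on_success
  script:
    - run test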
So far, the easiest way I've found is to use a rules definition like so:
test:
  stage: test
  rules:
    - if: '"1" != "1"'
(...)
This still feels a bit odd, so if you have a better solution, I'll gladly accept another answer.
I am planning to schedule jobs on runners based on a variable from the pipeline web UI. I have two runners registered for a project, and currently both are on the same machine. Both runners have the same tags, so differentiating by tags is not possible either.
This is my YAML file:
stages:
  - common
  - specific

test1:
  stage: common
  tags:
    - runner
    - linux
  script:
    - echo $CI_RUNNER_DESCRIPTION
    - echo "This is run on all runners"

test2:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 1"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION

test3:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 2"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION
So the variable on which the selection happens is num. The description of the runners can be 1 or 2.
The default value for num is 0, and according to the variable passed from the pipeline UI, the jobs are to be selected and run.
But when I execute the test with num set to 1 or 2, only test1 gets executed, which is common to all runners.
Is such an implementation possible, or am I facing this issue because the runners are on the same machine?
Using only with variables is not the right approach to select runners. Right now you select one of the two runners and try not to execute the job if the runner description doesn't match. Even if this worked, it wouldn't make much sense, as you wouldn't know in advance whether the job will be executed or not.
I suggest you add specific tags to the runners and specify which job should run on which runner.
test2:
  stage: specific
  tags:
    - runner_1
  script: echo "This is run on runner 1"

test3:
  stage: specific
  tags:
    - runner_2
  script: echo "This is run on runner 2"
If you want to select a runner for the job based on UI input, maybe you can use a variable in the tags section. To be honest, I haven't tested this solution and don't know whether it will work; either way, it would still require creating specific tags for your runners.
tags:
  - $tag_variable_selected_from_UI
You cannot currently do this with GitLab. As you noted in your comment on the accepted answer, this will be available when https://gitlab.com/gitlab-org/gitlab/-/issues/35742 is resolved.
That being said, while the accepted answer correctly states that only is the incorrect way to do this, it suggests another way that also doesn't work.
Once dynamic tagging support is added to gitlab-ci, this will be possible. Currently, the solution is to provide unique jobs, each with their own tags.
Depending on what exactly you are trying to accomplish, you can keep your code as DRY as possible for now by adding job templates in combination with extends, reducing the duplication to a couple of lines per job.
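A minimal sketch of that pattern (job and tag names are hypothetical): a hidden template holds the shared keys, and each runner-specific job adds only its tags:

.test_template:
  stage: specific
  script: echo "This is run on $CI_RUNNER_DESCRIPTION"

test2:
  extends: .test_template
  tags:
    - runner_1

test3:
  extends: .test_template
  tags:
    - runner_2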
Is there any way to extend and run only a specific job from another pipeline in my current pipeline without copy-pasting it?
For example I have two pipelines:
1. build -> code_check -> auto_test -> deploy
2. auto_test* -> report
I want to execute pipeline 2, where auto_test* runs on another runner while keeping the job's keys exactly as they are in pipeline 1 (except for tags, which I add to the job so I can use another runner).
I have a process restriction that I can't change anything in pipeline 1's config, so I need a way to execute only that specific job.
I have tried to do this through include: .gitlab-ci.yaml plus extends:. It somewhat works, but pipeline 2 ends up with all jobs from both pipelines, which is not what I would like to see.
The most straightforward way would be to copy the auto_test job specification from pipeline 1 into the gitlab-ci YAML of pipeline 2 on each update and add tags: ["MyRunner"], but I hoped there was a built-in way to do this.