GitLab Runner: run jobs on selected runners according to a custom variable

I am planning to schedule jobs on runners based on a variable set from the pipeline web UI. I have two runners registered for a project, and currently both are on the same machine. Both runners have the same tags, so differentiating them by tags is not possible either.
This is my yaml file
stages:
  - common
  - specific

test1:
  stage: common
  tags:
    - runner
    - linux
  script:
    - echo $CI_RUNNER_DESCRIPTION
    - echo "This is run on all runners"

test2:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 1"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION

test3:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 2"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION
So the variable on which the selection happens is "num". The description of each runner is either 1 or 2.
The default value for num is 0, and depending on the value passed from the pipeline UI, the jobs are to be selected and run.
But when I execute the pipeline with num set to 1 or 2, only test1, which is common to all runners, gets executed.
Is such an implementation possible, or am I facing this issue because the runners are on the same machine?

Using only:variables is not the right approach to selecting runners. Right now you select one of the two runners and try to skip the job if the runner description doesn't match. Even if this worked, it wouldn't make much sense, as you wouldn't know whether the job will be executed or not.
I suggest you add specific tags to your runners and specify which job should run on which runner.
test2:
  stage: specific
  tags:
    - runner_1
  script: echo "This is run on runner 1"

test3:
  stage: specific
  tags:
    - runner_2
  script: echo "This is run on runner 2"
If you want to select the runner for a job based on UI input, you could perhaps add a variable to the tags section. To be honest, I haven't tested this solution and don't know whether it works. Either way, it would still require creating specific tags for your runners.
tags:
  - $tag_variable_selected_from_UI

You cannot do this currently with GitLab. As you noted in your comment on the accepted answer, this will be available when https://gitlab.com/gitlab-org/gitlab/-/issues/35742 is resolved.
That being said, while the accepted answer correctly states that only is the incorrect way to do this, it suggests another way that also doesn't work.
Once dynamic tagging support is added to gitlab-ci, this will be possible. Currently, the solution is to provide unique jobs, each with its own tags.
Depending on what exactly you are trying to accomplish, you can keep your code as DRY as possible for now by adding job templates in combination with extends to reduce duplication to a couple of lines per job.
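A hedged sketch of that approach (untested; the tag names runner_1/runner_2 and the template name are placeholders, not from the original answer):

```yaml
# Hidden job (name starts with a dot) holding everything the
# runner-specific jobs have in common.
.test_template:
  stage: specific
  script:
    - echo "This is run on $CI_RUNNER_DESCRIPTION"

# Each concrete job inherits the template and only adds its runner tag.
test2:
  extends: .test_template
  tags:
    - runner_1

test3:
  extends: .test_template
  tags:
    - runner_2
```

With extends, each additional runner-specific job costs only the few lines that actually differ.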


Non-constant Variables in Gitlab Pipelines

Surely many of you have encountered this, and I would like to share my hacky solution. Essentially, during the CI/CD process of a GitLab pipeline, most parameters are passed through "variables". There are two issues I've encountered with that.
Those parameters cannot be altered in real time. Say I want to execute jobs based on information from previous jobs: it would always need to be saved in the cache, as opposed to being written to CI/CD variables.
The execution of jobs is evaluated before the script runs, so the "rules" will only ever apply to the original parameters. Trouble arises when those are only available at runtime.
For complex pipelines, one would want to pick and choose the tests automatically without having to respecify parameters every time. In my case I delivered a data product, and depending on its content, different steps had to be taken. How do we deal with these issues?
Changing parameters real-time:
https://docs.gitlab.com/ee/api/project_level_variables.html This API provides users with a way of interacting with the CI/CD variables. It will not work for variables defined at the head of the YML file under the variables keyword. Rather, it is a way to access the "custom CI/CD variables" described at https://docs.gitlab.com/ee/ci/variables/#custom-cicd-variables. This way, custom variables can be created, altered, and deleted while a pipeline is running. The only thing needed is a PRIVATE-TOKEN that has access rights to the API (I believe my token has all the rights).
job:
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'
Onto the next problem. Altering the variables won't let us actually control downstream jobs like this, because the rules block is evaluated before the pipeline is actually run. Hence the rule uses the variable's value from before the curl request is sent.
job2:
  stage: after_example
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env
The way to avoid that is child pipelines. Child pipelines are initialized inside the parent pipeline and check the environment variables anew. A full example should illustrate my point.
variables:
  PARAMETER: "Cant be changed"

stages:
  - example
  - after_example
  - finally

job_1:
  # Changing "VARIABLE_NAME" during runtime to "abc"; VARIABLE_NAME has to exist already
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'

job_2.1:
  # This won't get triggered, as we assume "abc" was not the value of VARIABLE_NAME before job_1
  stage: after_example
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env

job_3:
  stage: after_example
  trigger:
    include:
      - local: .downstream.yml
    strategy: depend

job_4:
  stage: finally
  script:
    - echo "Make sure to properly clean up your variables to a default value"

# inside .downstream.yml
stages:
  - downstream

job_2.2:
  # This will happen, because the child pipeline is initialized after job_1
  stage: downstream
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env
This code probably won't run as-is; however, it illustrates my point rather nicely. Job 2 should be executed based on an action that happens in job 1. While the variable will have been updated by the time we reach job 2.1, the rules check happens beforehand, so that job will never be executed. Child pipelines do the rules check during the runtime of the parent pipeline, which is why job 2.2 does run.
This variant is quite hacky and probably really inefficient, but for all intents and purposes it gets the job done.

Apply Resource_Groups to specific schedules?

We're having an issue within our GitLab instance that runs test jobs. We have approximately 30 scheduled jobs, which are triggered either manually or by the API. Out of those jobs, about 10 are specific to a CI/CD pipeline, and they get triggered all the time by merges/commits. What we'd like to do is use resource_group, but only apply that setting to those specific jobs.
When I add "resource_group: runtest" to our yml file, it applies to ALL our scheduled pipelines. Is there a way to apply it to just a specific set of schedules? Maybe by using a tag or a specific naming convention?
I think you would need to create extra jobs in your GitLab config so that you have some with resource_group and some without and then choose which to include in the pipeline using an environment variable input when the pipeline is triggered.
job with resource group:
  script: echo "Hello!"
  rules:
    - if: $MY_VARIABLE == "ThisIsASchedule"
      when: always
  resource_group: production

job without resource group:
  script: echo "Hello!"
  rules:
    - if: $MY_VARIABLE == "ThisIsASchedule"
      when: never
    - when: always
Thanks for the direction, Glen Thomas. I was able to get it to work.
I had to refactor our gitlab-ci.yml file a bit to organize it better. We now have two jobs that run the tests. One checks whether the variable is false and, if so, runs the jobs as normal. The second job checks whether the variable is true and, if so, runs the jobs with the added resource_group setting, so those jobs wait for the previous job to finish before kicking off.
so:
job1:
  rules:
    - if: $Variable != "True"
  script:
    - echo "regular schedules"
    # do all the test things here

job2:
  rules:
    - if: $Variable == "True"
  resource_group: group
  script:
    - echo "group schedules"
    # do all the test things here

What does colon in job name means when using Gitlab CI pipeline?

Using the following CI pipeline running on GitLab:
stages:
  - build
  - website

default:
  retry: 1
  timeout: 15 minutes

build:website:
  stage: build
  ...

website:dev:
  stage: website
  ...
What exactly does the first colon mean in the job names build:website: and website:dev:?
Is it like we pass the second part, after the stage name, as a variable to the stage?
Naming of jobs does not really change the behavior of the pipeline in this case. It's just the job name.
However, if you use the same prefix before the : for multiple jobs, the jobs will be grouped in the UI. It still doesn't affect the material function of the pipeline, but it does change how the jobs show up in the UI.
It's a purely cosmetic feature.
Jobs can also be grouped using / as the separator or a space.
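For instance (the job names here are illustrative), these three jobs would collapse into a single expandable "test" group in the pipeline graph:

```yaml
# All three jobs share the "test" prefix before the colon,
# so the GitLab UI renders them as one "test" group.
test:linux:
  stage: test
  script: echo "testing on linux"

test:osx:
  stage: test
  script: echo "testing on osx"

test:windows:
  stage: test
  script: echo "testing on windows"
```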

How to run multiple stages in the same container in gitlab?

I've got 3 stages:

stages:
  - provision
  - cpp tests
  - python tests

I need to run provision before running the tests. GitLab suggests using artifacts to pass results between stages, but I'm afraid that's not possible in my scenario, since Ansible does lots of different stuff (not just generate a few config files/binaries). Ideally, I'd like to be able to run all three stages in the same container, because in my scenario the stages are logical and could essentially be merged into one. I'd rather avoid merging them, though, as that would make .gitlab-ci.yml harder to understand.
If you have 3 tasks which can be merged into one and what you want to achieve is only to have 3 separated functions running in the same container to make the .gitlab-ci.yml file easier to understand, I would recommend using yaml anchors (see below).
.provision_template: &provision_definition
  - XXX

.cpp_tests_template: &cpp_tests_definition
  - YYY

.python_tests_template: &python_tests_definition
  - ZZZ

my_job:
  script:
    - *provision_definition
    - *cpp_tests_definition
    - *python_tests_definition

How can I trigger a job with a manual click OR a commit message

We have a job (deploy to production) that we generally trigger manually after checking the build on staging. However, very occasionally we have an issue that we've accidentally deployed and want to get a fix out ASAP. In those cases we run the tests locally (much faster) and put [urgent-fix] in our commit message to stop the tests from running in CI (skipping straight to the Docker image build and staging deploy).
What we'd like is that if we put [urgent-fix] in the commit message, it automatically triggers the production deploy (usually a when: manual step). Can we achieve this somehow?
Sounds like you can use a combination of the only:variables syntax and the $CI_COMMIT_MESSAGE predefined variable.
A rough idea (untested):

.deploy_production: &deploy_production
  stage: deploy production
  script:
    - echo "I'm deploying to production here"
  tags:
    - some special tag

deploy:manual:
  <<: *deploy_production
  when: manual
  allow_failure: false

deploy:urgent_fix:
  <<: *deploy_production
  only:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[urgent-fix\]/
As of GitLab v12.3 (~September 2019), GitLab comes with "flexible rules for CI build config". The feature is intended to replace the only/except functionality and is fully documented in the GitLab CI/CD reference.
With rules: you can now fully influence the when: behaviour of your job based on various conditions (in contrast to only/except:, which forced you to create separate jobs for situations like the one described in the OP; see the accepted answer).
For example you can do:
deploy:
  rules:
    - if: '$CI_COMMIT_TITLE =~ /urgent-fix/'
      when: on_success
    - when: manual # default fallback
  script:
    - sh deploy.sh
One thing to highlight is that in the above example I used $CI_COMMIT_TITLE instead of $CI_COMMIT_MESSAGE (see the GitLab docs) to avoid the string "urgent-fix" recurring in a commit message automatically assembled in the course of a git merge, which would then accidentally retrigger the job.
Disclaimer: please be aware that the new rules: feature conflicts with only/except: and thus requires you to remove any only/except: occurrences. Also, please note that rules: should be used in combination with workflow: (read the docs) to avoid unwanted "detached" pipelines being triggered.
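A minimal sketch of such a workflow: guard, following the duplicate-pipeline-prevention pattern from the GitLab docs (untested; adjust the conditions to your setup):

```yaml
# Run merge request pipelines; for branch pushes, skip the branch
# pipeline whenever an open merge request already covers that branch,
# so you don't get a duplicate "detached" pipeline.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'
```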
