Apply Resource_Groups to specific schedules? - gitlab

We're having an issue with our GitLab instance that runs test jobs. We have approximately 30 scheduled jobs, which are triggered either manually or by the API. Out of those jobs, about 10 are specific to a CI/CD pipeline, and they get triggered all the time by merges/commits. What we'd like to do is use resource_group, but only apply that setting to those specific jobs.
When I add "resource_group: runtest" to our YML file, it applies to ALL our scheduled pipelines. Is there a way to apply it to just a specific set of schedules? Maybe by using a tag or a specific naming convention?
Dan

I think you would need to create extra jobs in your GitLab config, so that you have some with resource_group and some without, and then choose which to include in the pipeline using an environment variable passed in when the pipeline is triggered.

job_with_resource_group:
  script: echo "Hello!"
  rules:
    - if: '$MY_VARIABLE == "ThisIsASchedule"'
      when: always
  resource_group: production

job_without_resource_group:
  script: echo "Hello!"
  rules:
    - if: '$MY_VARIABLE != "ThisIsASchedule"'
      when: always

Thanks for the direction, Glen Thomas. I was able to get it to work.
I had to refactor our gitlab-ci.yml file a bit to organize it better; we now have two jobs that run the tests. One checks whether the variable is false and, if so, runs the jobs as normal. The second job checks whether the variable is true and, if so, runs the jobs with the added resource_group setting, so those jobs wait for the previous job to finish before kicking off.
so:
job1:
  rules:
    - if: '$Variable != "True"'
  script:
    - echo "regular schedules"
    # do all the test things here

job2:
  rules:
    - if: '$Variable == "True"'
  resource_group: group
  script:
    - echo "group schedules"
    # do all the test things here

Related

Non-constant Variables in Gitlab Pipelines

Surely many of you have encountered this, and I would like to share my hacky solution. Essentially, during the CI/CD process of a GitLab pipeline, most parameters are passed through "variables". There are two issues I've encountered with that.
1. Those parameters cannot be altered at runtime. Say I want to execute jobs based on information from previous jobs; that information would always need to be saved in the cache, as opposed to written to CI/CD variables.
2. Which jobs execute is evaluated before any script runs, so the "rules" only ever apply to the original parameters. Trouble arises when those are only available during runtime.
For complex pipelines one would want to pick and choose the tests automatically, without having to respecify parameters every time. In my case, I delivered a data product and, depending on its content, different steps had to be taken. How do we deal with those issues?
Changing parameters real-time:
The project-level variables API (https://docs.gitlab.com/ee/api/project_level_variables.html) provides a way of interacting with CI/CD variables. This will not work for variables defined at the head of the YML file under the variables keyword; rather, it is a way to access the "custom CI/CD variables" described at https://docs.gitlab.com/ee/ci/variables/#custom-cicd-variables. This way, custom variables can be created, altered, and deleted while a pipeline is running. The only thing needed is a PRIVATE-TOKEN that has access rights to the API (I believe my token has all the rights).
job:
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'
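For readers who would rather build the same PUT request in a script, here is a minimal Python sketch using only the standard library. The host, project ID, variable name, and token are placeholders, not values from the original post; the request is constructed but not sent.

```python
import json
import urllib.request


def build_update_request(api_url, project_id, var_name, new_value, token):
    """Build (but do not send) the PUT request that sets a
    project-level CI/CD variable to new_value."""
    url = f"{api_url}/projects/{project_id}/variables/{var_name}"
    data = json.dumps({"value": new_value}).encode()
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("PRIVATE-TOKEN", token)
    req.add_header("Content-Type", "application/json")
    return req


req = build_update_request(
    "https://gitlab.example.com/api/v4", 1234, "VARIABLE_NAME", "abc", "glpat-xxxx"
)
print(req.full_url)
# -> https://gitlab.example.com/api/v4/projects/1234/variables/VARIABLE_NAME
# urllib.request.urlopen(req) would actually send it.
```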
Onto the next problem. Altering the variables won't let us actually control downstream jobs like this, because the "rules" block is evaluated before the pipeline is actually run. Hence it will use the variable's old value, from before the curl request is sent.
job2:
  stage: after_example
  rules:
    - if: '$VARIABLE_NAME == "abc"'
  script:
    - env
The way to avoid that is child pipelines. Child pipelines are initialized inside the parent pipeline and check the environment variables anew. A full example should illustrate my point.
variables:
  PARAMETER: "Cant be changed"

stages:
  - example
  - after_example
  - finally

job_1:
  # Changes "VARIABLE_NAME" to "abc" during runtime; VARIABLE_NAME has to exist already
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'

job_2.1:
  # This won't get triggered, assuming "abc" was not the value of VARIABLE_NAME before job_1
  stage: after_example
  rules:
    - if: '$VARIABLE_NAME == "abc"'
  script:
    - env

job_3:
  stage: after_example
  trigger:
    include:
      - local: .downstream.yml
    strategy: depend

job_4:
  stage: finally
  script:
    - echo "Make sure to properly clean up your variables to a default value"

# inside .downstream.yml
stages:
  - downstream

job_2.2:
  # This will run because the child pipeline is initialized after job_1
  stage: downstream
  rules:
    - if: '$VARIABLE_NAME == "abc"'
  script:
    - env
This code probably won't run as-is, but it illustrates my point rather nicely. Job 2 should be executed based on an action that happens in job 1. While the variable will have been updated by the time we reach job 2.1, the rules check happens beforehand, so that job will never be executed. Child pipelines do the rules check during the runtime of the parent pipeline, which is why job 2.2 does run.
This variant is quite hacky and probably rather inefficient, but for all intents and purposes it gets the job done.

Add GitLab CI job to pipeline based on script command result

I have a GitLab CI pipeline with a 'migration' job which I want added only if certain files changed between the current commit and the master branch, but in my current project I'm forced to use GitLab CI pipelines for the push event, which complicates things.
The docs on rules:changes clearly state that it will glitch and not work properly without an MR (my case is a push event), so that's out of the question.
The docs on rules:if state that it only works with environment variables. But the docs on passing CI/CD variables to another job clearly state that
These variables cannot be used as CI/CD variables to configure a
pipeline, but they can be used in job scripts.
So now I'm stuck. I can skip running the job in question by overriding the script and checking for file changes, but what I want is to not add the job to the pipeline in the first place.
While you can't add a single job to a pipeline based on the output of a script, you can add a child pipeline dynamically based on the script output. The method of using rules: with dynamic variables won't work, because rules: are evaluated at the time the pipeline is created, as you found in the docs.
However, you can achieve the same effect using the dynamic child pipelines feature. The idea is that you dynamically create the YAML for the desired pipeline in a job. The YAML created by your job is then used to create a child pipeline, which your pipeline can depend on.
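A minimal sketch of that approach follows. Job names, file names, and the diff check are illustrative assumptions, not from the question:

```yaml
# Hypothetical sketch: one job generates the child pipeline YAML based
# on a script result; a second job triggers it as a child pipeline.
generate-pipeline:
  stage: build
  script:
    - |
      if git diff --name-only origin/master...HEAD | grep -q '^migrations/'; then
        echo 'migration:' > child.yml
        echo '  script: ./migrate.sh' >> child.yml
      else
        echo 'no-op:' > child.yml
        echo '  script: echo "nothing to migrate"' >> child.yml
      fi
  artifacts:
    paths:
      - child.yml

run-migration:
  stage: deploy
  trigger:
    include:
      - artifact: child.yml
        job: generate-pipeline
    strategy: depend
```

Because the generated YAML only contains the migration job when the script decided it should, the job is genuinely absent from the child pipeline otherwise, rather than merely skipped.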
Sadly, adding/removing a GitLab job based on variables created in a previous job is not possible within a given pipeline.
A way to achieve this is to break your current pipeline into an upstream and a downstream.
The upstream will have two jobs:
The first one will use your script to define a variable.
The second will trigger the downstream, passing this variable.
Upstream
check_val:
  ...
  script:
    # Script imposes the logic with the needed checks
    # If true
    - echo "MY_CONDITIONAL_VAR=true" >> var.env
    # If false
    - echo "MY_CONDITIONAL_VAR=false" >> var.env
  artifacts:
    reports:
      dotenv: var.env

trigger_your_original_pipeline:
  ...
  variables:
    MY_CONDITIONAL_VAR: "$MY_CONDITIONAL_VAR"
  trigger:
    project: "project_namespace/project"
The downstream would be your original pipeline:
Downstream
...
migration:
  ...
  rules:
    - if: '$MY_CONDITIONAL_VAR == "true"'
Now MY_CONDITIONAL_VAR will be available at the start of the pipeline, so you can use rules to decide whether to add the migration job.

How to run gitlab stage periodically?

How can I run a GitLab stage periodically? I am aware of pipeline schedules, documented here: https://docs.gitlab.com/ee/ci/pipelines/schedules.html. I am only interested in running a particular test stage of a branch. The issue is that the branch I want to target has a lot of other stages involved. One option might be introducing a variable checked in the script portion: if true, execute the script. However, this is cumbersome and requires a lot of changes to all the stages of the CI file for that particular branch.
You can control when your jobs run using the only/except keywords, or the more advanced rules keyword along with the pipeline source variable.
Example with only:
scheduled_test:
  stage: tests
  image: my_image:latest
  only:
    - schedules
  script:
    - ./run_some_things.sh
The only keyword lets you define conditions that mean "this job only runs if these conditions are true", and offers a shorthand for checking the source of the pipeline. schedules means the pipeline was started from a schedule rather than a push, trigger, etc. The except keyword is the opposite: with except: schedules, the job would always run except when scheduled. You can see the full documentation for the only/except keywords here: https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic
As of GitLab version 12.3, the rules keyword extends the possibilities of the only/except options. We could get the same result as the example above like this:
scheduled_test:
  stage: tests
  image: my_image:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: never
In this example, we check the predefined variable $CI_PIPELINE_SOURCE to see what started the pipeline. If it's "schedule", we always run this job; if the source is "push" (the pipeline was started by a git push), the job never runs. With the rules keyword, the rules are evaluated in order and the first matching rule wins, so the example above reads: if the source is a schedule, always run; otherwise, if the source is a push, never run. You can also AND multiple conditions together in the same if:
rules:
  - if: '$CI_PIPELINE_SOURCE == "push" && $MY_CUSTOM_VARIABLE == "true"'
    when: manual
You can read the full documentation for the rules keyword here: https://docs.gitlab.com/ee/ci/yaml/#rules

Gitlab Runner to run jobs on selected runners according to custom variable

I am planning to schedule jobs on runners based on a variable from the pipeline web UI. I have two runners registered for a project, and currently both are on the same machine. Both runners have the same tags, so differentiation by tags is not possible either.
This is my yaml file
stages:
  - common
  - specific

test1:
  stage: common
  tags:
    - runner
    - linux
  script:
    - echo $CI_RUNNER_DESCRIPTION
    - echo "This is run on all runners"

test2:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 1"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION

test3:
  stage: specific
  tags:
    - runner
    - linux
  script: echo "This is run on runner 2"
  only:
    variables:
      - $num == $CI_RUNNER_DESCRIPTION
So the variable on which the selection happens is "num". The description of the runners can be 1 or 2.
The default value for num is 0, and depending on the value passed from the pipeline UI, the jobs are to be selected and run.
But when I execute the test with num set to 1 or 2, only test1 gets executed, which is common to all runners.
Is such an implementation possible, or am I facing this issue because the runners are on the same machine?
Using only: variables is not the right approach to selecting runners. Right now you select one of the two runners and try not to execute the job if the runner description doesn't match. Even if this worked, it wouldn't make much sense, as you wouldn't know whether the job would be executed or not.
I suggest you add specific tags to the runners and specify which job should run on which runner.
test2:
  stage: specific
  tags:
    - runner_1
  script: echo "This is run on runner 1"

test3:
  stage: specific
  tags:
    - runner_2
  script: echo "This is run on runner 2"
If you want to select the runner for a job based on UI input, you might be able to use a variable in the tags section. To be honest, I haven't tested this and don't know whether it works. It would still require creating specific tags for your runners.
tags:
  - $tag_variable_selected_from_UI
You cannot currently do this with GitLab. As you noted in your comment on the accepted answer, this will be available when https://gitlab.com/gitlab-org/gitlab/-/issues/35742 is resolved.
That being said, while the accepted answer correctly states that only is the wrong way to do this, the alternative it suggests also doesn't work.
Once dynamic tagging support is added to gitlab-ci, this will be possible. Currently, the solution is to create a separate job for each runner, each with its own tags.
Depending on what exactly you are trying to accomplish, you can keep your code as DRY as possible for now by using job templates in combination with extends, reducing duplication to a couple of lines per job.
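The template-plus-extends approach could look like this; the template name and tags are illustrative, based on the jobs from the question:

```yaml
# Hypothetical sketch: a hidden job template holds the shared
# configuration, and one concrete job per runner tag adds only
# what differs.
.test_template:
  stage: specific
  script:
    - echo "This is run on $CI_RUNNER_DESCRIPTION"

test_runner_1:
  extends: .test_template
  tags:
    - runner_1

test_runner_2:
  extends: .test_template
  tags:
    - runner_2
```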

How can I trigger a job with a manual click OR a commit message

We have a job (deploy to production) that we generally trigger manually after checking the build on staging. However, very occasionally we have an issue that we've accidentally deployed and want to get a fix out ASAP. In those cases we run the tests locally (much faster) and put [urgent-fix] in our commit message to stop the tests running in CI (skipping straight to the Docker image build and staging deploy).
What we'd like is: if we put [urgent-fix] in the commit message, it automatically triggers the production deploy (usually a when: manual step). Can we achieve this somehow?
Sounds like you can use a combination of the only:variables syntax and the $CI_COMMIT_MESSAGE predefined variable.
A rough idea (untested):
.deploy_production: &deploy_production
  stage: deploy production
  script:
    - echo "I'm deploying production here"
  tags:
    - some special tag

deploy:manual:
  <<: *deploy_production
  when: manual
  allow_failure: false

deploy:urgent_fix:
  <<: *deploy_production
  only:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[urgent-fix\]/
As of GitLab v12.3 (~September 2019) GitLab comes with "Flexible Rules for CI Build config". The feature is intended to replace the only/except functionality and is fully documented here.
With rules: you can now fully influence the when: behaviour of your job based on various conditions (in contrast to only/except:, which forced you to create separate jobs for situations like the one described in the OP; see the accepted answer).
For example you can do:
deploy:
  rules:
    - if: '$CI_COMMIT_TITLE =~ /urgent-fix/'
      when: on_success
    - when: manual # default fallback
  script:
    - sh deploy.sh
One thing to highlight: in the above example I used $CI_COMMIT_TITLE instead of $CI_COMMIT_MESSAGE (see the GitLab docs) to avoid the string "urgent-fix" recurring in a commit message automatically assembled in the course of a git merge, which would then accidentally retrigger the job.
Disclaimer: Please be aware that the new rules: feature conflicts with only/except: and thus requires you to remove any only/except: occurrences. Also, please note that rules: should only be used in combination with workflow: (read the docs) to avoid unwanted "detached" pipelines being triggered.
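A workflow: block of the kind the disclaimer refers to could be sketched as follows (the exact rules depend on your branching model, so treat this as an illustrative assumption):

```yaml
# Hypothetical sketch: run merge request pipelines, suppress duplicate
# branch pipelines while an open MR exists, and run branch pipelines
# otherwise.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'
```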
