Override a defined CI/CD variable in a GitLab job

I am currently facing an issue using GitLab CI. I am using GitLab 13.12 Community Edition.
I want to be able to override the value of a CI/CD variable from within a gitlab-ci.yml job.
You might say that I could just pass the value from one job to another, but the goal is to change that value so that, on the next pipeline, all my jobs will use the updated one.
To be precise, I want to be able to change MY_CICD_VARIABLE_TO_CHANGE, which I've defined in Project > CI/CD > Variables, and update its value from a gitlab-ci job.
Something like this, but not only changing it for the current pipeline, actually updating it:
change_variable_value:
  stage: "change_variable"
  image: myimage
  script:
    - $MY_CICD_VARIABLE_TO_CHANGE="value_changed"
I tried every single solution specified here:
https://docs.gitlab.com/13.12/ee/ci/variables/index.html#override-a-defined-cicd-variable
Nothing seems to work.
I also tried using artifacts, but it seems that I am not able to pass artifacts between two distinct pipelines; they have to be part of the same pipeline, even by curling the API (at least in the Community Edition).
Any idea is more than welcome :)
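One direction worth sketching: project-level CI/CD variables can be updated through GitLab's REST API (PUT /projects/:id/variables/:key), so a job could change the value for subsequent pipelines that way. A minimal sketch, assuming an api-scoped token is stored in a masked project variable called GITLAB_API_TOKEN (that variable name is an assumption, not a predefined one):

change_variable_value:
  stage: "change_variable"
  image: curlimages/curl
  script:
    # Update the project-level variable so later pipelines pick up the new value
    # (GITLAB_API_TOKEN is an assumed project variable holding an api-scoped token)
    - >
      curl --request PUT
      --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN"
      --form "value=value_changed"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/variables/MY_CICD_VARIABLE_TO_CHANGE"

Note that this only affects pipelines created after the call; the pipeline that runs the job keeps the value it started with.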

Related

How to exclude a GitLab CI job from the pipeline based on a value in a project file?

I need to exclude a job from the pipeline in case my project version is a pre-release.
How do I know it's a pre-release?
I set the following in the version info file that all project files and tools use:
version = "1.2.3-pre"
From the CI script, I parse the file, extract the version value, know whether it's a pre-release or not, and can set the result in an environment variable (a sketch of that step follows below).
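A minimal sketch of that parse step, assuming the version line lives in a file called version.txt (the file name and pattern are assumptions):

check_version:
  script:
    # Extract the version string from the version info file (file name is an assumption)
    - VERSION=$(sed -n 's/^version = "\(.*\)"$/\1/p' version.txt)
    # Flag pre-releases so later steps can react
    - |
      if [[ "$VERSION" == *-pre* ]]; then
        echo "Detected pre-release version $VERSION"
      fi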
The only way I know to exclude a job from a pipeline is to use rules, but I also know from the GitLab docs that:
rules are evaluated before any jobs run
before_script is also said to run together with the script, i.e. after the rules are applied.
I can stop the job only after it starts, from the script itself, based on the version value, but what I need is to remove the job from the pipeline in the first place, so it's not displayed in the pipeline history. Any idea?
Thanks
How do you run (start) your pipeline, and is the information whether "it's a pre-release" already known at this point?
If yes, then you could add a flag like IS_PRERELEASE as a variable to the pipeline and use it in the rules: section of your job, as in the sketch below. The drawback is that this will not work with automatic pipelines (triggered by a commit or MR), but you can use this approach with manually triggered pipelines (https://docs.gitlab.com/ee/ci/variables/#override-a-variable-when-running-a-pipeline-manually) or via the API (https://docs.gitlab.com/ee/ci/triggers/#pass-cicd-variables-in-the-api-call).
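A minimal sketch of that rules: clause, assuming the flag is supplied as IS_PRERELEASE when the pipeline is started (the job name is illustrative):

deploy_release:
  stage: deploy
  rules:
    # Drop this job from the pipeline entirely when the run is flagged as a pre-release
    - if: '$IS_PRERELEASE == "true"'
      when: never
    # Otherwise include the job as usual
    - when: on_success
  script:
    - ./deploy.sh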

How to read labels in Gitlab CI script

I have a few use cases in my Gitlab setup I would like to be able to support:
If a certain label (let's call it “skip_build”) is set, the deployment steps should not be run when I merge an MR to a main branch. This would be useful when we have multiple MRs being merged right after another and only need the last one built.
If another label (we'll call it “skip_tests”) is set, I should be able to read it as an env var from within the script and alter the flow within the script accordingly (using normal bash syntax), e.g. to alter the package command parameters used a bit. This is useful for small changes where it might not make sense to run a lengthy test suite.
Is this possible with Gitlab, and if so, how?
I’ve tried experimenting with CI_MERGE_REQUEST_LABELS, but I don’t seem to be able to read it as an env var from within the script.
You have to use merge request pipelines for the CI_MERGE_REQUEST_LABELS variable (and other MR-related variables) to be present, as documented in the predefined variables reference.
You could use a rules: clause to skip jobs. Something like:
build:
  rules: # only run this job if the regex pattern does not match
    - if: $CI_MERGE_REQUEST_LABELS !~ /skip_build/
You can also do this on any other kind of predefined (or user-defined) variable, like branch name, commit messages, MR titles, etc. Whatever works for you.
For example, a built-in feature of GitLab is that if your commit message contains [ci skip], the pipeline will not run. You could implement similar functionality for your jobs and/or pipelines through rules: or workflow:rules:.
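For the second use case (reading the labels inside the script and branching on them), a minimal sketch, assuming merge request pipelines so that CI_MERGE_REQUEST_LABELS is populated (the package.sh script and its flag are assumptions):

package:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    # CI_MERGE_REQUEST_LABELS is a comma-separated list of the MR's labels
    - |
      if [[ ",$CI_MERGE_REQUEST_LABELS," == *",skip_tests,"* ]]; then
        echo "skip_tests label set, packaging without running the test suite"
        ./package.sh --skip-tests   # script and flag are illustrative
      else
        ./package.sh
      fi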

Add GitLab CI job to pipeline based on script command result

I have a GitLab CI pipeline with a 'migration' job which I want to be added only if certain files changed between the current commit and the master branch, but in my current project I'm forced to use GitLab CI pipelines for push events, which complicates things.
The docs on rules:changes clearly state that it will glitch and not work properly without an MR (my case of push events), so that's out of the question.
The docs on rules:if state that it only works with env variables. But the docs on passing CI/CD variables to another job clearly state that:
These variables cannot be used as CI/CD variables to configure a pipeline, but they can be used in job scripts.
So now I'm stuck. I could just skip running the job in question by overriding the script and checking for file changes, but what I want is to not add the job to the pipeline in the first place.
While you can't add a job alone to a pipeline based on the output of a script, you can add a child pipeline dynamically based on the script output. The method of using rules: with dynamic variables won't work because rules: are evaluated at the time the pipeline is created, as you found in the docs.
However, you can achieve the same effect using the dynamic child pipelines feature. The idea is that you dynamically create the YAML for the desired pipeline in a job; that YAML is then used to create a child pipeline, which your pipeline can depend on.
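A minimal sketch of that approach, assuming two pre-written child configs are kept in the repository (the file names, paths, and the diff check are assumptions):

generate-child-pipeline:
  stage: build
  script:
    # Decide which child config to use based on a script check
    # (assumes origin/master is available in the clone)
    - |
      if git diff --quiet origin/master -- migrations/; then
        cp ci/child-noop.yml child-pipeline.yml
      else
        cp ci/child-migration.yml child-pipeline.yml
      fi
  artifacts:
    paths:
      - child-pipeline.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-child-pipeline

Here ci/child-migration.yml would contain the migration job, while ci/child-noop.yml contains a placeholder job, since a child pipeline cannot be empty.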
Sadly, adding or removing a GitLab job based on variables created by a previous job is not possible within a given pipeline.
One way to achieve this is to break your current pipeline into an upstream and a downstream pipeline.
The upstream will have two jobs:
The first one uses your script to define a variable.
The second one triggers the downstream pipeline, passing this variable along.
Upstream
check_val:
  ...
  script:
    # ... script imposes the logic with the needed checks
    # if true:
    - echo "MY_CONDITIONAL_VAR=true" >> var.env
    # if false:
    - echo "MY_CONDITIONAL_VAR=false" >> var.env
  artifacts:
    reports:
      dotenv: var.env

trigger_your_original_pipeline:
  ...
  variables:
    MY_CONDITIONAL_VAR: "$MY_CONDITIONAL_VAR"
  trigger:
    project: "project_namespace/project"
The downstream would be your original pipeline
Downstream
...
migration:
  ...
  rules:
    - if: '$MY_CONDITIONAL_VAR == "true"'
Now MY_CONDITIONAL_VAR is available at the start of the downstream pipeline, so you can use rules to decide whether or not to add the migration job.

GitLab CI: don't run tests if there are no changes to the code (pytest-testmon)

I want to run tests in GitLab CI using pytest-testmon. Unfortunately, it seems that the generated .testmondata file is not persisted on the gitlab-runner instances, so no record of what was already tested is kept.
Is there a way to save this .testmondata file and reuse it (per branch) in order to avoid repeating tests of unchanged code when code is pushed to those branches?
You can upload files generated during your job to GitLab with the artifacts keyword, so you can view them later by downloading them, view them on a Merge Request, or use them in other jobs. Here's a simple example:
tests:
  stage: tests
  script:
    - ./run_tests.sh
  artifacts:
    paths:
      - .testmondata
    name: "${CI_COMMIT_REF_NAME}_tests_testmondata"
    expose_as: 'Tests Results'
    expire_in: 3 months
In this example, after your script section (and before_script or after_script if present) runs, gitlab-runner will look for a file/directory that matches the entries in the paths array. If none are found, it will throw an error.
The runner will then upload the artifacts to Gitlab using the name in the name field. It's good practice to use some of the predefined variables that Gitlab CI provides so you can easily identify which pipeline and job an artifact came from, if necessary. In this example the name has the branch or tag name, and then _tests_testmondata.
The next attribute, expose_as lets you put a link on a Merge Request for this branch to display the artifacts. Without expose_as, you'll have to open the Pipeline and Job to view the artifacts.
Next, expire_in lets you define when the artifacts from this job should expire. If you don't set this attribute, the artifacts will expire based on the Gitlab server's setting, which by default is 30 days. You can supply a number of seconds as an int (3600, or one hour), never (to never expire), or many other human-readable formats like 3 years 8 months 28 days or 46 months.
You can view all the available predefined variables here: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html, and you can view all the options for the artifacts keyword here: https://docs.gitlab.com/ee/ci/yaml/#artifacts
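Since the goal here is to reuse .testmondata across pipelines on the same branch, a per-branch cache is another option worth sketching (cache contents are best-effort and may be absent on a fresh runner):

tests:
  stage: tests
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - .testmondata
  script:
    - ./run_tests.sh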

How to run a specific Gitlab job from another Gitlab pipeline?

Is there any way to extend and run only a specific job from another pipeline in my current pipeline, without copy-pasting it?
For example I have two pipelines:
1. build -> code_check -> auto_test -> deploy
2. auto_test* -> report
I want to execute pipeline 2, where auto_test* runs on another runner, while keeping the job's keys exactly as they are in pipeline 1 (except for tags, which I add to the job in order to use another runner).
I have a process restriction that I can't change anything in the pipeline 1 config, so I need a way to execute only that specific job.
I have tried to do this by including the other .gitlab-ci.yaml and using extends:. It somewhat works, but pipeline 2 ends up with all the jobs from both pipelines, which is not what I would like to see (a sketch of that attempt follows below).
The most straightforward way would be to copy the auto_test job specification from pipeline 1 into the gitlab-ci YAML of pipeline 2 on each update and add tags: ["MyRunner"], but I hoped there was a built-in way to do this.
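For reference, a minimal sketch of that include + extends attempt, assuming pipeline 1 lives in another project (the project path and job names are assumptions):

# .gitlab-ci.yml of pipeline 2
include:
  - project: "group/pipeline-1-project"   # project path is illustrative
    file: ".gitlab-ci.yml"

auto_test_on_my_runner:
  extends: auto_test        # job defined in the included pipeline 1 config
  tags: ["MyRunner"]        # the only change: run on a different runner

As noted above, the drawback is that every job from the included file also becomes part of pipeline 2; include has no built-in way to pick out a single job.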
