Gitlab runner with arguments

I use a GitLab runner on my Win10 PC. The following YAML script creates a build environment in Python for VS2019. Everything works correctly when I type & "C:\Python2718\python.exe" ./test.py --product=TEST --vs=vs2019 in PowerShell, but it fails in GitLab.
The process finishes with "ValueError: illegal environment variable name". Is "=" handled differently in GitLab?
build-job: # This job runs in the build stage, which runs first.
  tags:
    - TEST
  stage: build
  script:
    - echo "Compiling the code..."
    - '& "C:\Python2718\python.exe" ./test.py --product=TEST --vs=vs2019'
    - echo "Compile complete."
  artifacts:
    when: always
    paths:
      - x64bld.log
      - x86bld.log
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: manual
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
      allow_failure: false
    - when: manual
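For what it's worth, that ValueError is raised by Python itself (CPython refuses to set an environment variable whose name contains "="); it is not a GitLab message. A hedged first test, assuming test.py parses its options with argparse or similar (which accepts values as separate tokens as well as joined with "="), is to drop the "=" from the arguments and see whether the job gets further:

script:
  - echo "Compiling the code..."
  # pass the values as separate tokens instead of using "="
  - '& "C:\Python2718\python.exe" ./test.py --product TEST --vs vs2019'
  - echo "Compile complete."

If that runs, the problem is in how the "=" tokens travel from the runner's shell to the script, rather than in GitLab's YAML parsing.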

Related

Gitlab.ci skip job if changes are only in specific files inside a relevant folder

I have this gitlab-ci.yml:
build-docker:
  stage: build-docker
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - app/Dockerfile
        - app/requirements.txt
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - echo 'Git Pulling, building and restarting'

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - app/**/*
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - echo 'Git Pulling and restarting'
My problem is that I don't need to run deploy if the changed files are only app/Dockerfile and/or app/requirements.txt (because the build job has already run and does the same as the deploy stage, and more), but I do need it to run if changes happen to any other file inside the app folder.
I already tried this in the deploy stage:
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  changes:
    - "app/!(Dockerfile)"
    - "app/!(requirements.txt)"
    - app/**/*
  when: always
- when: manual
  allow_failure: true
But this doesn't work as expected.
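For what it's worth, a hedged sketch of one workaround: as far as I know, rules:changes glob patterns do not support extglob negation like !(...), so the filtering can be moved into the script instead, keeping the broad changes: rule and bailing out early when only the two build files changed. This assumes git is installed in the job image and that CI_COMMIT_BEFORE_SHA is usable (it is all zeros on a branch's first push, so guard for that in real use):

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - app/**/*
      when: always
    - when: manual
      allow_failure: true
  image:
    name: alpine
    entrypoint: [""]
  script:
    - apk add --no-cache git
    # list changed files under app/, drop the two the build job already covers
    - |
      OTHER=$(git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA" -- app/ \
        | grep -vE '^app/(Dockerfile|requirements\.txt)$' || true)
      if [ -z "$OTHER" ]; then
        echo "Only Dockerfile/requirements.txt changed; skipping deploy"
        exit 0
      fi
    - echo 'Git Pulling and restarting'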

How to dynamically set a gitlab executor via tags on the job?

Context
I am currently trying to implement homolog/production environments with GitLab, and have GitLab Runner installations on two different servers, both with the same configuration and dependencies, in order to build the project.
I've written the .gitlab-ci.yml and tried it on both runners, using tag: runner_name, and it worked just fine. The problem is that I cannot assign the executor when the pipeline runs.
An example of what I wrote:
job1:
  stage: setup
  variables:
    RUNNER_TAG: "executor-default"
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /development/
      variables:
        RUNNER_TAG: "executor-default"
      when: always
      allow_failure: false
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /production/
      variables:
        RUNNER_TAG: "executor-production"
      when: always
      allow_failure: false
  tags:
    - $RUNNER_TAG
  script:
    - npm install
This job should run on executor-default if the target is the development branch, and on executor-production if the target is the production branch. On every execution I've tried, it ran on executor-default. I have checked the documentation and forums, but found no clue on how to implement or fix this behavior. How can I dynamically set a GitLab executor via tags on the job?
You can use the "tags" keyword with both values (executor-default and executor-production) together with the "CI_RUNNER_TAGS" predefined variable. Try the code below.
job1:
  stage: setup
  tags:
    - "executor-default"
    - "executor-production"
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /development/ && $CI_RUNNER_TAGS == "executor-default"
      when: always
      allow_failure: false
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /production/ && $CI_RUNNER_TAGS == "executor-production"
      when: always
      allow_failure: false
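Side note, hedged (check your version against the GitLab release notes): expanding CI/CD variables inside the tags keyword was only added in GitLab 14.1, so on older instances $RUNNER_TAG is never resolved. On 14.1 or later, the rules:variables approach from the question should work roughly as written:

job1:
  stage: setup
  variables:
    RUNNER_TAG: "executor-default"
  rules:
    # override the tag only for merge requests targeting production
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /production/
      variables:
        RUNNER_TAG: "executor-production"
    - when: on_success
  tags:
    - $RUNNER_TAG
  script:
    - npm install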

Exclude merge_request, push to create jobs in gitlab CI pipeline

workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH == "Sprint-Release-Branch"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_PIPELINE_SOURCE == "push"'
      when: never
    - when: always

stages:
  - Stage1
  - Stage2
  - Stage3
Task1:
  stage: Stage1
  script:
    - echo "Stage1"
  rules:
    - if: '$CI_COMMIT_TAG =~ /^\d+\.\d+\.DEV\d+/'
  tags:
    - Runner

Task2:
  stage: Stage1
  script:
    - echo "Checking code standard as per 'Coding Standards'"
  rules:
    - if: '$CI_COMMIT_TAG =~ /^\d+\.\d+\.DEV\d+/'
      allow_failure: true
  tags:
    - Runner

Task3:
  stage: Stage2
  script:
    - echo "Stage2"
  when: manual
  tags:
    - Runner

Task4:
  stage: Stage3
  script:
    - echo "Stage3"
  when: manual
  tags:
    - Runner
Above is my GitLab CI file, where I am trying to ensure the pipeline does not add jobs when merge and push events happen on "Sprint-Release-Branch". But whenever a merge request merges a feature branch onto "Sprint-Release-Branch", the jobs defined with "when: manual" still get added to the pipeline.
In my situation, the dev team creates a different feature branch for each user story and then merges those feature branches onto Sprint-Release-Branch, which carries the above YAML file. So for every merge request, multiple jobs defined with a "manual" trigger keep getting added to the pipeline.
How can I optimize my YAML so that jobs with a manual trigger do not get added to the pipeline?
jobs which are defined as "when: manual" get added in pipeline

You have to repeat the whole workflow logic in the job's own rules when you override them.
Task3:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == "Sprint-Release-Branch"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_PIPELINE_SOURCE == "push"'
      when: never
    - when: manual
Also, it's better to use when: on_success, not always. Note, too, that $CI_PIPELINE_SOURCE can never equal both "merge_request_event" and "push" at once, so that && condition never matches; if the intent is to block both sources, combine them with || instead.
Do something like the following with YAML anchors:
.myrules: &myrules
  if: $CI_COMMIT_BRANCH == "Sprint-Release-Branch" || ($CI_PIPELINE_SOURCE == "merge_request_event" && $CI_PIPELINE_SOURCE == "push")
  when: never

workflow:
  rules:
    - *myrules
    - when: on_success

Task3:
  ...
  rules:
    - *myrules
    - when: manual
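One caveat worth knowing: YAML anchors do not work across files combined with include:, so if these shared rules live in a separate included file, use !reference tags instead of anchors.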

Gitlab - Separating CI from Deployment

We are currently using Jenkins and planning to migrate to GitLab. We actually have two Jenkinsfiles in each repo. One is set up as a Multibranch Pipeline and runs on all changes; it is the merge check that runs all the various linting, tests, building of the Docker containers, etc. The second Jenkinsfile is only run manually from Jenkins; it takes in all the various input parameters and deploys the code, which mostly comes from, say, the linted Ansible/Terraform, selecting a Docker image that would have already been built via the CI side of things.
I know GitLab doesn't support this model, but this project is already MVP'd, so reworking how the devs combined their logic and deployment code is probably not going to happen.
Is it possible, in one gitlab-ci.yml file, to run some jobs on merges/pushes and run others only on manual deployment?
e.g.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'

stages:
  - test
  - deploy
  - destroy

test-python-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda

deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  rules:
    - when: manual
  script:
    - terraform destroy
We have not even deployed GitLab yet, so I'm writing that off the top of my head, but I want to know what level of pain I am in for.
There are multiple options to achieve your goal with minimized configuration effort:
Working with hidden jobs and using inheritance or references for easier configuration - doable in one file
Extracting parts into child pipelines for easier usage
Reduced configuration in one file
I assume what you hate most is having to redefine the rules for your jobs. There are two ways you can reduce that duplication.
Inheritance
Inheritance can reduce a lot of duplication, but it can also cause problems with unintended side effects.
.test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  extends: .test
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  extends: .test
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  extends: .test
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
Composition
By using !reference you can combine certain aspects of jobs; see https://docs.gitlab.com/ee/ci/yaml/#reference-tags
.test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test-python-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Python"
    - black
    - bandit
    - flake8
    - tox

test-terraform-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Terraform"
    - terraform validate --yadda

test-ansible-job:
  stage: test
  rules:
    - !reference [.test, rules]
  script:
    - echo "Test Ansible"
    - ansible-lint --yadda
Parent-child pipelines
Sometimes it might also be suitable to extract functionality into child pipelines. You can more easily control what happens at each stage when you call the child pipeline, and the fewer lines of code give you a better overview. It adds complexity to your builds, but generally it produces a cleaner and easier-to-maintain CI structure (my opinion).
This approach only adds the child pipeline when needed. Furthermore, you could also centralize this file if it is similar across the deployments.
.gitlab-ci.yml
deploy:
  stage: deploy
  trigger:
    include:
      - local: Deploy.gitlab-ci.yml
    strategy: depend
  rules:
    - if: $CI_PIPELINE_SOURCE == 'web' # maybe also useful, as it will only happen on a web interaction
      when: manual
    - if: $CI_PIPELINE_SOURCE == 'schedule' # maybe also useful, for schedules
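(strategy: depend makes the parent's deploy job wait for the child pipeline and mirror its status, rather than reporting success as soon as the child is triggered.)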
Deploy.gitlab-ci.yml
deploy-job:
  stage: deploy
  variables:
    DEPLOYMENT_ID: "Only deploy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - echo "Terraform Deploy"
    - terraform deploy
    - ansible-playbook yaddas

destroy-job:
  stage: destroy
  variables:
    DEPLOYMENT_ID: "Only destroy-job can use this variable's value"
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password#ops
  script:
    - terraform destroy
Sidenotes
This is not a 100% answer to your question, but it shows you a lot of flexibility, and you will soon realize that mimicking Jenkins is not ideal. For example, having deployment jobs directly attached to a commit, and visible on it, gives a better overview of exactly what was deployed. If you need to run such things manually, I highly recommend using schedules with preconfigured values, as they only have a play button. Also, you already have the artifacts in place, built by your pipeline, so why not add additional steps that utilize them instead of providing this information manually?
I hope my insights will be useful to you, and happy migration ;)

GitLab pipeline (.gitlab-ci.yml) for CI and scheduled SAST

We would like to have a .gitlab-ci.yml which supports the default CI pipeline and a SAST pipeline that only runs on a daily schedule:
lint, build, test-unit (on merge request)
test-sast (scheduled once a day)
What seemed logical but didn't work is this configuration:
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Workflows/MergeRequest-Pipelines.gitlab-ci.yml

image: node:lts-alpine

stages:
  - lint
  - build
  - test

lint:
  stage: lint
  script:
    - npm i
    - npm run lint

build:
  stage: build
  script:
    - npm i
    - npm run build

test-unit:
  stage: test
  script:
    - npm i
    - npm run test:unit

test-sast:
  stage: test
  script: [ "true" ]
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
    - when: never
I then did some tests using the environment variable SAST_DISABLED, which didn't work either.
Maybe someone has a similar setup and can help out with a working sample?
Your workflow:rules do not have an explicit allow for $CI_PIPELINE_SOURCE == "schedule".
This is what I use for merge request pipelines:
workflow:
  rules:
    # Do not start pipeline for WIP/Draft commits
    - if: $CI_COMMIT_TITLE =~ /^(WIP|Draft)/i
      when: never
    # MergeRequest-Pipelines workflow
    # For merge requests, create a pipeline.
    - if: $CI_MERGE_REQUEST_IID || $CI_PIPELINE_SOURCE == "merge_request_event"
    # For tags, create a pipeline.
    - if: $CI_COMMIT_TAG
    # For the default branch, create a pipeline (this includes schedules, pushes, merges, etc.).
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    # For other pipeline triggers
    - if: $CI_PIPELINE_SOURCE =~ /^(trigger|pipeline|web|api)$/
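One follow-up to watch for (a hedged sketch, not part of the original answer): once scheduled pipelines pass the workflow rules via the default-branch condition, lint, build, and test-unit will also run on the daily schedule unless they explicitly opt out, e.g.:

lint:
  stage: lint
  rules:
    # skip this job on the scheduled (SAST-only) pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
  script:
    - npm i
    - npm run lint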
