Can GitLab CI trigger another project's pipeline stage? - gitlab

I have a project A and an E2E project. When I deploy project A, I want to trigger the E2E pipeline to run its tests, but I only want to trigger the test stage; we don't need the trigger to run E2E's build, deploy, etc.
e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend
    stage: test  # this nested key is what GitLab rejects with "unknown keys: stage"
I have tried to use stage in the trigger config, but got the error unknown keys: stage.
Do you have any suggestions?

In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using the rules syntax:
build-from-trigger:
  stage: build
  when: never
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
      when: always
  script:
    - ./build.sh # this is just an example, you'd run whatever you normally would here
The first when statement, when: never, sets the default for the job: by default, this job will never run. Then, using the rules syntax, we set a condition that will allow the job to run: if the CI_COMMIT_REF_NAME variable (the branch or tag name) is master AND the CI_PIPELINE_SOURCE variable (whatever kicked off this pipeline) is a trigger, then we run this job.
You can read about the when keyword here: https://docs.gitlab.com/ee/ci/yaml/#when, and you can read the rules documentation here: https://docs.gitlab.com/ee/ci/yaml/#rules
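Putting the two projects together, a minimal sketch might look like the following (the project path project/E2E comes from the question; job names and the test script are illustrative assumptions). One caveat: pipelines created with the trigger: keyword report a CI_PIPELINE_SOURCE of "pipeline", while pipelines started via a trigger token report "trigger", so the sketch matches either:

# Project A: trigger the downstream E2E pipeline
trigger-e2e:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend  # project A's pipeline waits for the E2E result

# Project E2E: run the test job only in triggered pipelines;
# build/deploy jobs can be gated the other way around
e2e-test:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" || $CI_PIPELINE_SOURCE == "trigger"'
  script:
    - ./run-tests.sh  # illustrative test runner, not a real script from the question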

Related

Unexpected behaviour of "rules" in GitLab CI

I have some problems with understanding how and why "rules" in GitLab CI work.
I have written a minimal code showing the issue in a GitLab project: https://gitlab.com/daniel.grabowski/gitlab-ci-rules-problems It contains two directories ("files1/" and "files2/") with some files in them and a .gitlab-ci.yml file.
My configuration
Here's the CI configuration:
stages:
  - build

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"

.job_tpl:
  image: alpine:latest
  stage: build
  variables:
    TARGET_BRANCH: $CI_DEFAULT_BRANCH
  rules:
    - if: $CI_COMMIT_BRANCH == $TARGET_BRANCH
      changes:
        - $FILES_DIR/**/*
      variables:
        JOB_ENV: "prod"
    - if: $CI_COMMIT_BRANCH != $TARGET_BRANCH
      changes:
        - $FILES_DIR/**/*
      when: manual
      allow_failure: true
      variables:
        JOB_ENV: "dev"
  script:
    - echo "CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH"
    - echo "TARGET_BRANCH=$TARGET_BRANCH"
    - echo "JOB_ENV=$JOB_ENV"

files1 job:
  extends: .job_tpl
  variables:
    FILES_DIR: files1

files2 job:
  extends: .job_tpl
  variables:
    FILES_DIR: files2
As you can see in the above code, I'm using workflow to run only "branch pipelines", and I have two "twin" jobs configured, each watching for changes in one of the project's directories. The TARGET_BRANCH variable is of course unnecessary in the demo project, but I need something like this in the real one, and it shows one of my problems. Additionally, the jobs behave differently depending on the branch for which they are run.
My expectations
What I want to achieve is:
Each of the jobs should be added to a pipeline only when I push changes to the files1/ or files2/ directory, respectively.
When I push changes to a branch different from "main", a manual job responsible for the changed directory should be added to the pipeline.
When I merge changes to the "main" branch, a job responsible for the changed directory should be added to the pipeline and started automatically.
Test scenario
I create a new branch from "main", make a change in files1/test.txt, and push the branch to GitLab.
what I expect: a pipeline created with only "files1 job" runnable manually
what I get: a pipeline with both jobs (both manual). Actually, I've found an explanation of this behaviour here: https://docs.gitlab.com/ee/ci/jobs/job_control.html#jobs-or-pipelines-run-unexpectedly-when-using-changes - "The changes rule always evaluates to true when pushing a new branch or a new tag to GitLab."
On the same branch I make another change in files1/test.txt and push it.
what I expect: a pipeline created with only "files1 job" runnable manually
what I get: exactly what I expect since the branch isn't a "new" one
I create a Merge Request from my branch to main and merge it.
what I expect: a pipeline created with only "files1 job" which starts automatically
what I get: a pipeline created with only "files1 job" but a manual one
My questions/problems
Can you suggest any way to bypass the issue with "changes" always evaluating to "true" on new branches (see the sketch after this list)? Actually, it behaves exactly as I want if I don't use "rules", but let's assume I need "rules".
Why do the jobs run as "manual" on the main branch, in spite of the "if" condition in which both the CI_COMMIT_BRANCH and TARGET_BRANCH variables are (or should be) set to "main"? To debug it I'm printing those vars in the job's "script", and when I run it in a "main" pipeline I get:
$ echo "CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH"
CI_COMMIT_BRANCH=main
$ echo "TARGET_BRANCH=$TARGET_BRANCH"
TARGET_BRANCH=main
$ echo "JOB_ENV=$JOB_ENV"
JOB_ENV=dev
so theoretically CI should take the "automatic" job path.
Generally, I find the CI "rules" quite inconvenient and confusing, but as I understand it GitLab prefers them to the "only/except" solution, so I'm trying to refactor my CI/CD to use them, which will fail if I don't find a solution for the above difficulties :(
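For the first problem, one documented way around changes evaluating to true on new branches is rules:changes:compare_to (available in GitLab 15.3 and later), which diffs against a fixed ref instead of whatever the push contains. A minimal sketch for one of the jobs, overriding the template's rules for brevity (note that a child job's rules replace the template's entirely, so the manual dev path is omitted here):

files1 job:
  extends: .job_tpl
  variables:
    FILES_DIR: files1
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        paths:
          - files1/**/*
        compare_to: refs/heads/main  # diff against main, even on a brand-new branch
      variables:
        JOB_ENV: "prod"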

Apply GitLab CI/CD pipeline changes for pipeline run triggered by merge request

I have created a new CI/CD pipeline in GitLab via a .gitlab-ci.yml file in the repo root of a new project, with a job structured like so:
...
test:
  stage: test
  script:
    - pip install tox flake8
    - tox -e py36,flake8
  # Run only for merge requests on main branch
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "main"'
...
GitLab does not trigger the pipeline, saying there is no .gitlab-ci.yml file in the repository. I had assumed that pipeline changes would apply to the merge request run that was triggered. I can understand why this isn't the case for security purposes in a public repository, but I would like to test pipeline changes in the merge request that I created for my self-hosted private GitLab instance.
Is this possible?
This was a programming error. I needed to use:
- if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'
instead of:
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "main"'
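For reference, the corrected job in full (everything except the variable is unchanged from the snippet above):

test:
  stage: test
  script:
    - pip install tox flake8
    - tox -e py36,flake8
  # Run only for merge requests targeting the main branch
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'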

Issue with pipelines on GitLab

I get an error when I'd like to run my pipeline on my dev branch.
My .gitlab-ci.yml file on my dev branch:
stages:
  - build

build:
  stage: build
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == /dev/'
      when: manual
  script:
    - echo "Hello World !"
First of all, my pipeline is not executed when I create a merge request from dev to master. The second issue: I get an error message when I try to execute it with the "Run pipeline" button:
Pipeline cannot be run.
No stages / jobs for this pipeline.
Your branch name is dev and does not match the rule you defined. That's why GitLab claims there are no stages/jobs for this pipeline when you run it manually.
Please edit the rule to match dev like this (double quotes instead of slashes):
rules:
  - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "dev"'
    when: manual
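With that change applied, the whole file from the question would read as below. Keep in mind that CI_MERGE_REQUEST_SOURCE_BRANCH_NAME is only populated in merge request pipelines:

stages:
  - build

build:
  stage: build
  rules:
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "dev"'
      when: manual
  script:
    - echo "Hello World !"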

Preventing Deploy When Previous Stage Failed on GitLab CI

I have the following stages defined in my project's gitlab-config.yaml:
stages:
  - Prepare
  - Report
  - Static Analysis
  - Unit Test
  - Integration Test Prep
  - Integration Tests
  - Deploy
The stage before Deploy is Integration Tests, and all jobs within this stage are not allowed to fail (which is the default according to the docs).
I have a number of deploy jobs that deploy to different environments. My production deploy job uses the following logic:
rules:
  - if: $DEPLOY_ENV == "production" && $CI_COMMIT_BRANCH == "production"
    when: always
My problem with the current setup is that even though the Integration Tests jobs are not allowed to fail, the production deploy stage is still reached when they do.
It appears that the use of always overrides the fact that the previous stage's jobs are not allowed to fail.
How can I prevent the production deploy job from running if any of the previous Integration Tests jobs fail?
The solution is to use on_success instead of always (docs):
rules:
  - if: $DEPLOY_ENV == "production" && $CI_COMMIT_BRANCH == "production"
    when: on_success
You can also use the needs keyword (https://docs.gitlab.com/ee/ci/yaml/index.html#needs) to specify more complex dependencies between jobs:
jobA:
  stage: build
  script: echo "Building"

jobB:
  stage: build
  needs: ["jobA"]
  script: echo "Another Build"
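Combining the two halves of this answer, a production deploy gated on both the rule and explicit needs might look like this sketch (the job names and deploy script are placeholders, not from the question):

deploy-production:
  stage: Deploy
  needs: ["integration-test-a", "integration-test-b"]  # placeholder job names
  rules:
    - if: $DEPLOY_ENV == "production" && $CI_COMMIT_BRANCH == "production"
      when: on_success  # skipped if any needed job fails
  script:
    - ./deploy.sh  # placeholder deploy command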

GitLab, manual job with dependency

My pipeline has 3 stages: Test, Prepare and Publish.
Test, is executed for any commit on any branch
Prepare, is executed on develop and master
Publish, reuse artifacts from Prepare and is executed on develop* and master
I have added a manual stage, "Manual publish", to manually publish any successful commit from anything other than develop and master. However, that stage requires the artifacts from Prepare. I have used needs to run Prepare, but that job is executed after Test even if we don't trigger "Manual publish", which is a waste of time and resources.
Can we attach/import/merge an existing job into another one?
I have tried to import the Prepare job into Manual publish, but without success:
build-and-publish-manually:
  <<: *prepare-docker
  <<: *build-and-publish
  except:
    variables:
      - $CI_COMMIT_REF_NAME == $DEVELOP_BRANCH
      - $CI_COMMIT_REF_NAME == $MASTER_BRANCH
  when: manual
Each job should be executed on a different runner; prepare-artifact is executed inside a Docker runner, while build-and-publish requires a Shell runner.
The solution is to make the first job manual and have the next one "need" it.
I have added a manual Prepare job, triggered manually, and the Publish job is configured with needs pointing at it, so it is executed only once the manual Prepare is done.
# ...
prepare-docker-manually:
  <<: *prepare-docker
  when: manual

build-and-publish-manually:
  <<: *build-and-publish
  needs: ["prepare-docker-manually"]  # run only after the manual prepare job has finished
  except:
    variables:
      - $CI_COMMIT_REF_NAME == $DEVELOP_BRANCH
      - $CI_COMMIT_REF_NAME == $MASTER_BRANCH
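Since the anchors *prepare-docker and *build-and-publish reference definitions not shown here, a minimal self-contained sketch of the same pattern follows (stage names, job names, paths, and scripts are illustrative assumptions):

prepare-manually:
  stage: prepare
  when: manual
  script:
    - ./prepare.sh  # illustrative: produces the artifacts
  artifacts:
    paths:
      - dist/

publish-manually:
  stage: publish
  needs: ["prepare-manually"]  # waits for the manual job and downloads its artifacts
  script:
    - ./publish.sh dist/  # illustrative publish step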
