I have to run a pipeline based on some condition that I want to evaluate in the .gitlab-ci.yml config file. Basically, I want to create jobs only if a condition is true. Below is my current .gitlab-ci.yml.
# This is a test to run multiple pipelines with a single .gitlab-ci.yml file.
# The identifier stage identifies which pipeline (A or B) to run; only the jobs
# of that pipeline are executed and the rest are skipped.

# variables:
#   PIPE_TYPE: "$(mkdir identifier; echo 'B' > identifier/type.txt; cat identifier/type.txt)"
#   PIPE_TYPE: "B"

stages:
  # - identify
  - build
  - test

# identify:
#   stage: identify
#   before_script:
#     - mkdir "identifier"
#     - echo "B" > identifier/type.txt
#   script:
#     - PIPE_TYPE=$(cat identifier/type.txt)
#     - echo $PIPE_TYPE
#   artifacts:
#     paths:
#       - identifier/type.txt

before_script:
  # - mkdir "identifier"
  # - echo "B" > identifier/type.txt
  # - export PIPE_TYPE=$(cat identifier/type.txt)
  - export PIPE_TYPE="B"

build_pipeline_A:
  stage: build
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "A"
  script:
    - echo $PIPE_TYPE
    - echo "Building using A."
    - mkdir "buildA"
    - touch buildA/info.txt
  artifacts:
    paths:
      - buildA/info.txt

build_pipeline_B:
  stage: build
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "B"
  script:
    - echo "Building using B."
    - mkdir "buildB"
    - touch buildB/info.txt
  artifacts:
    paths:
      - buildB/info.txt

test_pipeline_A:
  stage: test
  script:
    - echo "Testing A"
    - test -f "buildA/info.txt"
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "A"
  dependencies:
    - build_pipeline_A

test_pipeline_B:
  stage: test
  script:
    - echo "Testing B"
    - test -f "buildB/info.txt"
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "B"
  dependencies:
    - build_pipeline_B
Here, I have two pipelines: A, with the jobs build_pipeline_A and test_pipeline_A, and B, with the jobs build_pipeline_B and test_pipeline_B.
First I thought I could create an identify job that would evaluate some logic, write which pipeline to use into a file (identifier/type.txt), and set the PIPE_TYPE variable. Each job could then test this variable under only:variables and be created only if PIPE_TYPE equals that job's pipeline type. Unfortunately, this didn't work.
On my second try, I used global variables to evaluate the expression and assign the result to PIPE_TYPE; this didn't work either.
On my last try I used a before_script to evaluate the expression and export PIPE_TYPE, in the hope that only:variables would pick up its value, but no luck with this approach either.
I ran out of ideas at this point and decided to post the question.
Here is my test's .gitlab-ci.yaml file; it's a public repo, so please feel free to poke around it.
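The underlying reason all three attempts fail is that only:variables is evaluated when the pipeline is created, before any before_script or script runs, so a variable exported at runtime can never influence which jobs are created. One commonly used workaround is a dynamic child pipeline: a sketch, assuming a GitLab version that supports trigger:include from artifacts (roughly 12.9+); pipeline-a.yml, pipeline-b.yml, and the identification logic are hypothetical, not part of the original config:

```yaml
stages:
  - identify
  - trigger

# Decide the pipeline type at runtime and emit the matching child config.
# pipeline-a.yml / pipeline-b.yml are hypothetical files kept in the repo,
# each containing the build/test jobs of one pipeline type.
generate-pipeline:
  stage: identify
  script:
    - PIPE_TYPE=B   # replace with whatever logic identifies the pipeline type
    - if [ "$PIPE_TYPE" = "A" ]; then cp pipeline-a.yml child.yml; else cp pipeline-b.yml child.yml; fi
  artifacts:
    paths:
      - child.yml

# Run the generated configuration as a child pipeline.
run-pipeline:
  stage: trigger
  trigger:
    include:
      - artifact: child.yml
        job: generate-pipeline
    strategy: depend   # parent pipeline mirrors the child's status
```

Because the child pipeline's configuration is produced at runtime, the "condition" is effectively evaluated before the child's jobs are created, which is exactly what only:variables cannot do within a single pipeline.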
Related
I have created a GitLab pipeline with 8 stages. For each stage I have set allow_failure: true so that the remaining stages execute even if a stage fails.
Currently, if any stage fails, the final pipeline status shows as "! passed".
I want to execute all stages of the pipeline, and if any stage fails, I want the pipeline status to display as failed.
Note: I can't change the value of allow_failure.
Please find the attached image for your reference.
There's no configuration in GitLab for this, so we'll have to handle it with scripting.
Idea:
We add a new job at the end that validates that all previous jobs were successful; if it sees any failure, it fails.
How to check? We leverage files/artifacts to pass that information along.
All the stages up to the end will be executed by GitLab (whether they pass or fail).
Minimal Snippet:
jobA:
  stage: A
  allow_failure: true
  script:
    - echo "building..."
    - echo "jobA" > ./completedA
  artifacts:
    paths:
      - ./completedA

jobB:
  stage: B
  allow_failure: true
  script:
    - echo "testing..."
    - exit 1
    - echo "jobB" > ./completedB
  artifacts:
    paths:
      - ./completedB

jobC:
  stage: C
  allow_failure: true
  script:
    - echo "deploying..."
    - echo "jobC" > ./completedC
  artifacts:
    paths:
      - ./completedC

validate:
  stage: Validate
  script:
    - |
      if [[ -f ./completedA && -f ./completedB && -f ./completedC ]]; then
        echo "All stages were completed"
      else
        echo "Stages were not completed"
        exit 1
      fi

stages:
  - A
  - B
  - C
  - Validate
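With only three stages the hard-coded if in the validate job is fine, but with eight stages (as in the question) it gets repetitive. The same check can be sketched as a small POSIX-shell loop over the marker files; check_stages and the stage list are illustrative names, not part of the original config:

```shell
# Return 0 if every ./completed<Stage> marker file exists, 1 otherwise.
# Prints which stages are missing, mirroring the validate job above.
check_stages() {
  status=0
  for stage in "$@"; do
    if [ ! -f "./completed${stage}" ]; then
      echo "Stage ${stage} did not complete"
      status=1
    fi
  done
  if [ "$status" -eq 0 ]; then
    echo "All stages were completed"
  fi
  return $status
}
```

In the validate job's script this would be called as check_stages A B C (or whatever the full list of stage names is).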
I need to run 2 jobs in parallel on different OSes. For this scenario I have two active runners on different servers with the required OS. Each runner has a unique tag, which I use for the jobs. But the jobs run sequentially, not in parallel. Is there a keyword I have to use to run both jobs in parallel?
My gitlab-ci.yml:
stages:
  - test

rhel8:
  stage: test
  rules:
    - if: $TEST == "rhel8" || $TEST == "all"
  tags:
    - rhel8
  script:
    - echo "Test RHEL 8"

rhel7:
  stage: test
  rules:
    - if: $TEST == "rhel7" || $TEST == "all"
  tags:
    - rhel7
  script:
    - echo "Test RHEL 7"
The parallel keyword is what you are looking for. Here is an example of using it to run the same job with different runner tags:
stages:
  - test

rhel:
  stage: test
  rules:
    - if: $TEST =~ '/^rhel.*/' || $TEST == "all"
  parallel:
    matrix:
      - TEST: rhel7
      - TEST: rhel8
  tags:
    - $TEST
  script:
    - echo "Test $TEST"
I have a GitLab pipeline similar to the one below:
stages:
  - test

p1::test:
  stage: test
  script:
    - echo " parallel 1"

p2::test:
  stage: test
  script:
    - echo " parallel 2"

p3::test:
  stage: test
  script:
    - echo " parallel 3"

p4::test:
  stage: test
  script:
    - echo " parallel 4"
All four of these jobs will run in parallel. How can I get the status of the test stage as a whole?
I want to notify success if all four pass, and failure if any one of the jobs fails.
One easy way to tell whether the prior stage (and everything before it) passed or failed is to add another stage with two jobs that use opposite when keywords.
If a job has when: on_success (the default), it will only run if all prior jobs have succeeded (or failed with allow_failure: true, or are when: manual jobs that haven't run yet). If any job has failed, it will not run.
If a job has when: on_failure, it will run only if a prior job has failed.
This can be useful for cleaning up build artifacts or rolling back changes, but it applies to your use case too. For example, you could use two jobs like this:
stages:
  - test
  - verify_tests

p1::test:
  stage: test
  script:
    - echo " parallel 1"

p2::test:
  stage: test
  script:
    - echo " parallel 2"

p3::test:
  stage: test
  script:
    - echo " parallel 3"

p4::test:
  stage: test
  script:
    - echo " parallel 4"

tests_passed:
  stage: verify_tests
  when: on_success # this is the default, so you could leave it off; added for clarity
  script:
    - echo "all tests passed" # do whatever you need to when the tests all pass

tests_failed:
  stage: verify_tests
  when: on_failure # this will only run if a job in a prior stage fails
  script:
    - echo "a test job failed" # do whatever you need to when a job fails
You can do this for each of your stages if you need to know the status after each one programmatically.
I have a little problem with my GitLab pipeline.
I would like to run a manual job with a scheduled rule, or find a way to run a scheduled pipeline with my jobs without rewriting the pipeline.
As you can see in the example, I have two jobs in the firstjob stage: one manual and one scheduled. My problem is that if I run the scheduled workflow, AC-test won't start, and if I try to run FirstJob by a scheduled rule, it won't start because of the when: manual section.
Here is my example:
stages:
  - firstjob
  - test
  - build
  - deploy

FirstJob:
  stage: firstjob
  script:
    - echo "Hello Peoples!"
    - sleep 1
  when: manual
  allow_failure: false

FirstJobSchedule:
  stage: firstjob
  script:
    - echo "Hello Scheduled Peoples!"
    - sleep 1
  only:
    - schedule
  allow_failure: false

AC-test:
  needs: [FirstJob]
  stage: test
  script:
    - echo "AC Test is running"
    - sleep 10

ProdJobBuild:
  stage: build
  needs: [AC-test]
  script:
    - echo "Building thing to prod"

ProdJobDeploy:
  stage: deploy
  needs: [ProdJobBuild]
  script:
    - echo "Deploying thing to prod"
Is there a way to solve this problem somehow?
Has anybody else run into this problem?
There's a way to do that with only:, but I'd suggest moving to rules:, as only: is going to be deprecated.
Then you won't need two jobs with different conditions; you can use a branching condition instead:
stages:
  - firstjob
  - test
  - build
  - deploy

workflow:
  rules:
    - if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_TAG
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

FirstJob:
  stage: firstjob
  script:
    - echo "Hello Peoples!"
    - sleep 1
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      # when: always # is the default value
    - when: manual
      # allow_failure: false # is the default value

AC-test:
  needs: [FirstJob]
  stage: test
  script:
    - echo "AC Test is running"
    - sleep 10

ProdJobBuild:
  stage: build
  needs: [AC-test]
  script:
    - echo "Building thing to prod"
With this, the pipeline checks whether the job was started by a schedule and, if so, runs it automatically.
If not, the job stays manual.
*I took the liberty of picking the MR-style workflow: rules to avoid duplicate pipelines.
I want to use $CI_ENVIRONMENT_SLUG to point our Selenium tests to the right dynamic environment, but the variable is empty.
During the deployment stage it has a proper value, and I don't get why the variable is not available in every stage. The echo command prints an empty line.
Tests:
  image: maven:3.5.0-jdk-8
  stage: Tests and static code checks
  variables:
    QA_PUBLISH_URL: http://$CI_ENVIRONMENT_SLUG-publish.test.com
  script:
    - echo $QA_PUBLISH_URL
    - echo $CI_ENVIRONMENT_SLUG # empty
    - mvn clean -Dmaven.repo.local=../../.m2/repository -B -s ../../settings.xml -P testrunner install -DExecutionID="FF_LARGE_WINDOWS10" -DRunMode="desktopLocal" -DSeleniumServerURL="https://$QA_ZALENIUM_USER:$QA_ZALENIUM_PASS@zalenium.test.com/wd/hub" -Dcucumber.options="--tags @sanity" -DJenkinsEnv="test.com" -DSeleniumSauce="No" -DBaseUrl=$QA_PUBLISH_URL
CI_ENVIRONMENT_SLUG is only available in the review job, i.e. the job that has the environment set.
And currently (11.2) there is no way to pass variables from one job to another, although you could run
echo -e -n "$CI_ENVIRONMENT_SLUG" > ci_environment_slug.txt
in the review job and add the file to its artifacts:
artifacts:
  paths:
    - ci_environment_slug.txt
and in your Tests job, use:
before_script:
  - export CI_ENVIRONMENT_SLUG=$(cat ci_environment_slug.txt)
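For reference, GitLab versions newer than the 11.2 mentioned above can pass variables between jobs directly via dotenv report artifacts, which avoids the manual cat/export. A sketch, where REVIEW_ENV_SLUG, the stage names, and the review job's environment are illustrative assumptions:

```yaml
review:
  stage: review
  environment:
    name: review/$CI_COMMIT_REF_SLUG
  script:
    # Save the slug under a new name so it cannot collide with the
    # predefined CI_ENVIRONMENT_SLUG in downstream jobs.
    - echo "REVIEW_ENV_SLUG=$CI_ENVIRONMENT_SLUG" > deploy.env
  artifacts:
    reports:
      dotenv: deploy.env

Tests:
  stage: test
  needs: [review]
  script:
    # REVIEW_ENV_SLUG is injected automatically from the dotenv report.
    - echo "$REVIEW_ENV_SLUG"
```

Every key in the dotenv file becomes an environment variable in jobs that declare the producing job in needs, so the Tests job can build its URLs from REVIEW_ENV_SLUG without any file handling.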