Gitlab CI $CI_ENVIRONMENT_SLUG is empty - scope

I want to use the $CI_ENVIRONMENT_SLUG to point our Selenium tests to the right dynamic environment, but the variable is empty.
During the deployment stage it has a proper value, and I don't get why the variable is not available in every stage. The echo command prints an empty line.
Tests:
  image: maven:3.5.0-jdk-8
  stage: Tests and static code checks
  variables:
    QA_PUBLISH_URL: http://$CI_ENVIRONMENT_SLUG-publish.test.com
  script:
    - echo $QA_PUBLISH_URL
    - echo $CI_ENVIRONMENT_SLUG # empty
    - mvn clean -Dmaven.repo.local=../../.m2/repository -B -s ../../settings.xml -P testrunner install -DExecutionID="FF_LARGE_WINDOWS10" -DRunMode="desktopLocal" -DSeleniumServerURL="https://$QA_ZALENIUM_USER:$QA_ZALENIUM_PASS@zalenium.test.com/wd/hub" -Dcucumber.options="--tags @sanity" -DJenkinsEnv="test.com" -DSeleniumSauce="No" -DBaseUrl=$QA_PUBLISH_URL

CI_ENVIRONMENT_SLUG is only available in the review JOB that has the environment set.
And currently (11.2) there is no way to move variables from one JOB to another, although you could:
echo -e -n "$CI_ENVIRONMENT_SLUG" > ci_environment_slug.txt
in the review JOB and add the file to the artifacts:
artifacts:
  paths:
    - ci_environment_slug.txt
and in your Tests job, use
before_script:
  - export CI_ENVIRONMENT_SLUG=$(cat ci_environment_slug.txt)
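For reference, a minimal sketch of that hand-off, assuming a review job named review in a deploy stage and the environment/URL naming from the question (the environment name and deploy script are placeholders, not the original pipeline):

review:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG   # this is what gives the job a CI_ENVIRONMENT_SLUG
  script:
    - echo "deploy review app here"
    - echo -e -n "$CI_ENVIRONMENT_SLUG" > ci_environment_slug.txt
  artifacts:
    paths:
      - ci_environment_slug.txt

Tests:
  stage: Tests and static code checks
  before_script:
    # re-create the variable from the artifact; note that QA_PUBLISH_URL has to be
    # built here rather than under variables:, where CI_ENVIRONMENT_SLUG is still empty
    - export CI_ENVIRONMENT_SLUG=$(cat ci_environment_slug.txt)
    - export QA_PUBLISH_URL="http://$CI_ENVIRONMENT_SLUG-publish.test.com"
  script:
    - echo $QA_PUBLISH_URL

On newer GitLab versions, artifacts:reports:dotenv offers a more direct way to pass variables like this from one job to a later one.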

Related

Gitlab: execute stage conditionally depends on a new file generated on previous stage or not

image: python:3.10

stages:
  - build
  - notify

build:
  stage: build
  script:
    - echo "run build"
    - line=$(cat faulty_links.txt | wc -l | xargs)
    - echo $line
    - if [ ${line} -gt "0" ]; then echo $line > found_fault_links; fi
    - ls -l
  artifacts:
    untracked: true

notify:
  stage: notify
  dependencies:
    - build
  script:
    - echo "notify"
  rules:
    - exists:
        - found_fault_links
So ideally, in the build stage, when I detect that faulty_links.txt has contents, I create a new file called found_fault_links.
Using ls -l I can see that the file found_fault_links is created.
In the notify stage, if the file found_fault_links exists, the notify job should run; if not, the stage should be skipped.
But when I run this pipeline, only the build stage runs; the second stage never runs.
How can I fix this issue?
I don't have to follow this approach; if there are better ways, please recommend them as well.
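One thing worth noting (this explanation is not part of the thread above): rules are evaluated when the pipeline is created, before any job has run, so rules:exists only sees files committed to the repository, never files generated by an earlier job. A possible workaround, sketched under that assumption, is to always run the notify job and let its script decide based on the artifact (job and file names follow the question):

build:
  stage: build
  script:
    - echo "run build"
    - line=$(wc -l < faulty_links.txt | xargs)
    - if [ "$line" -gt 0 ]; then echo "$line" > found_fault_links; fi
  artifacts:
    paths:
      - found_fault_links   # only a warning is emitted if the file was not created

notify:
  stage: notify
  dependencies:
    - build
  script:
    # the marker file is only present when faulty links were found
    - if [ ! -f found_fault_links ]; then echo "no faulty links, nothing to notify"; exit 0; fi
    - echo "notify about $(cat found_fault_links) faulty links"

A dynamically generated child pipeline could also create the notify job only when needed, but the script-level check above is usually the simplest fix.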

GitLab CI/CD Pipeline - How to output the console logs of a job (Maven) as an artifact to download

I am currently creating a CI/CD pipeline in GitLab and have some jobs that run Maven commands, e.g.:

maven test-compile:
  stage: test
  script:
    - mvn clean test-compile
These are simple console commands, but I want to write the logs produced by the runner to a file that can be downloaded as an artifact, while also keeping the logs visible in the console while the pipeline is running.
I attempted the following, which wrote the logs to a file, but redirecting them meant they were no longer shown in the console, and I had to tail the file to work around that:
script:
  - mvn clean test-compile > log.txt
  - tail -f ./log.txt
Is there a simpler way to get around this?
Many thanks
Hi @CsNova, and welcome to SO.
The first thing is to pipe your output through tee. This will print the build output on stdout AND into the file:
mvn clean test-compile | tee log.txt
Then you can add a step in your pipeline to save this artifact (https://apps.risksciences.ucla.edu/gitlab/help/ci/pipelines/job_artifacts.md#defining-artifacts-in-gitlab-ciyml), for example:
pdf:
  script: xelatex mycv.tex
  artifacts:
    paths:
      - mycv.pdf
    expire_in: 1 week
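Putting the two together for the Maven job in the question, a sketch might look like this (the log file name is arbitrary; set -o pipefail is assumed so a failing mvn still fails the job even though its output goes through tee):

maven test-compile:
  stage: test
  script:
    - set -o pipefail
    # stream the Maven output to the console and into log.txt at the same time
    - mvn clean test-compile | tee log.txt
  artifacts:
    paths:
      - log.txt
    when: always        # keep the log as an artifact even when the build fails
    expire_in: 1 week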

How to get the status of a stage on gitlab?

I have a GitLab pipeline similar to the one below:

stages:
  - test

p1::test:
  stage: test
  script:
    - echo " parallel 1"

p2::test:
  stage: test
  script:
    - echo " parallel 2"

p3::test:
  stage: test
  script:
    - echo " parallel 3"

p4::test:
  stage: test
  script:
    - echo " parallel 4"
All four of these jobs run in parallel. How can I get the status of the test stage?
I want to notify Success if all four pass, and Failed if any one of the jobs fails.
One easy way to tell if the prior stage (and everything before it) has passed or failed is to add another stage with two jobs that use opposite when keywords.
If a job has when: on_success (the default) it will only run if all prior jobs have succeeded (or if they failed but have allow_failure: true, or have when: manual and haven't run yet). If any job has failed, it will not.
If a job has when: on_failure it will run if any of the prior jobs has failed.
This can be useful for cleaning up build artifacts, or rolling back changes, but it can apply for your use case too. For example, you could use two jobs like this:
stages:
  - test
  - verify_tests

p1::test:
  stage: test
  script:
    - echo " parallel 1"

p2::test:
  stage: test
  script:
    - echo " parallel 2"

p3::test:
  stage: test
  script:
    - echo " parallel 3"

p4::test:
  stage: test
  script:
    - echo " parallel 4"

tests_passed:
  stage: verify_tests
  when: on_success # this is the default, so you could leave this off. I'm adding it for clarity
  script:
    - # do whatever you need to when the tests all pass

tests_failed:
  stage: verify_tests
  when: on_failure # this will only run if a job in a prior stage fails
  script:
    - # do whatever you need to when a job fails
You can do this for each of your stages if you need to know the status after each one programmatically.
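For instance, the verify jobs could push a notification like this sketch does ($NOTIFY_WEBHOOK_URL is a hypothetical CI/CD variable standing in for whatever notification endpoint you use; it is not a GitLab built-in):

tests_passed:
  stage: verify_tests
  when: on_success
  script:
    # $CI_PIPELINE_URL is a predefined variable pointing at the current pipeline
    - curl -s -X POST --data "status=success&pipeline=$CI_PIPELINE_URL" "$NOTIFY_WEBHOOK_URL"

tests_failed:
  stage: verify_tests
  when: on_failure
  script:
    - curl -s -X POST --data "status=failed&pipeline=$CI_PIPELINE_URL" "$NOTIFY_WEBHOOK_URL"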

How to fix "No such file or directory" during gitlab-ci run

I have just configured GitLab CI with a runner and ran the template Bash CI tasks as follows:
# This file is a template, and might need editing before it works on your project.
# see https://docs.gitlab.com/ce/ci/yaml/README.html for all available options
# you can delete this line if you're not using Docker
#image: busybox:latest

before_script:
  - echo "Before script section"
  - echo "For example you might run an update here or install a build dependency"
  - echo "Or perhaps you might print out some debugging details"

after_script:
  - echo "After script section"
  - echo "For example you might do some cleanup here"

build1:
  stage: build
  script:
    - echo "Do your build here"

test1:
  stage: test
  script:
    - echo "Do a test here"
    - echo "For example run a test suite"

test2:
  stage: test
  script:
    - echo "Do another parallel test here"
    - echo "For example run a lint test"

deploy1:
  stage: deploy
  script:
    - echo "Do your deploy here"
But the job failed.
When I logged into the runner machine, I found only a ${projectname}.tmp folder under the expected build location.
Did I miss something?
I finally found that this was a bug with gitlab-runner and Debian Buster; commenting out the .bash_logout file at /home/gitlab-runner fixes it.
Here is the discussion of the issue, in case it helps others who hit the same problem.
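If it helps, commenting the file out can be done roughly like this (a sketch; the path assumes the default gitlab-runner home directory):

# keep a backup, then prefix every line of .bash_logout with a comment marker
sudo cp /home/gitlab-runner/.bash_logout /home/gitlab-runner/.bash_logout.bak
sudo sed -i 's/^/#/' /home/gitlab-runner/.bash_logout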
I had a similar issue, but my issue was caused by cdebootstrap
To fix the issue I used:
stable ./debian-minbase http://deb.debian.org/debian/
Hope this answer is useful.
You can also use: sudo find / -name "mk-prebuilt-images.sh"
It will end up finding:
/usr/lib/gitlab-runner/mk-prebuilt-images.sh

Run a gitlab pipeline based on a condition

I have to run a pipeline based on a condition that I want to evaluate in the .gitlab-ci.yml config file. Basically, I want to create jobs only if a condition is true. Below is my current .gitlab-ci.yml.
# This is a test to run multiple pipelines with a single .gitlab-ci.yml file.
# The identify stage will determine which pipeline (A or B) to run; only jobs
# of that pipeline would be executed and the rest would be skipped.

# variables:
#   PIPE_TYPE: "$(mkdir identifier; echo 'B' > identifier/type.txt; cat identifier/type.txt)"
#   PIPE_TYPE: "B"

stages:
  #- identify
  - build
  - test

#identify:
#  stage: identify
#  before_script:
#    - mkdir "identifier"
#    - echo "B" > identifier/type.txt
#  script:
#    - PIPE_TYPE=$(cat identifier/type.txt)
#    - echo $PIPE_TYPE
#  artifacts:
#    paths:
#      - identifier/type.txt

before_script:
  # - mkdir "identifier"
  # - echo "B" > identifier/type.txt
  # - export PIPE_TYPE=$(cat identifier/type.txt)
  - export PIPE_TYPE="B"

build_pipeline_A:
  stage: build
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "A"
  script:
    - echo $PIPE_TYPE
    - echo "Building using A."
    - mkdir "buildA"
    - touch buildA/info.txt
  artifacts:
    paths:
      - buildA/info.txt

build_pipeline_B:
  stage: build
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "B"
  script:
    - echo "Building using B."
    - mkdir "buildB"
    - touch buildB/info.txt
  artifacts:
    paths:
      - buildB/info.txt

test_pipeline_A:
  stage: test
  script:
    - echo "Testing A"
    - test -f "buildA/info.txt"
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "A"
  dependencies:
    - build_pipeline_A

test_pipeline_B:
  stage: test
  script:
    - echo "Testing B"
    - test -f "buildB/info.txt"
  only:
    refs:
      - master
    variables:
      - $PIPE_TYPE == "B"
  dependencies:
    - build_pipeline_B
Here, I have two pipelines: A, with the jobs build_pipeline_A and test_pipeline_A, and B, with the jobs build_pipeline_B and test_pipeline_B.
First I thought I could create an identify job that evaluates some logic, writes which pipeline to use into a file (identifier/type.txt), and updates the PIPE_TYPE variable. That variable could then be tested in every job under only:variables, so a job would only be created if PIPE_TYPE equals the job's pipeline type. Unfortunately, this didn't work.
In my second try I thought of using global variables and evaluating the expression there to set PIPE_TYPE; this didn't work either.
In my last try I used a before_script that evaluates the expression and sets PIPE_TYPE, hoping only:variables would pick up the PIPE_TYPE value, but no luck with this approach either.
I ran out of ideas at this point and decided to post the question.
Here is my test's .gitlab-ci.yml file; it's a public repo, so please feel free to poke around it.
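One observation that explains all three attempts (this is not from the original question): only:variables and rules are evaluated when the pipeline is created, before any before_script or job runs, so a value exported in a script or produced by an earlier job can never influence which jobs get created. The condition has to come from a variable that already exists at pipeline creation time, such as a pipeline, schedule, or trigger variable. A sketch under that assumption, using rules and a pipeline-level PIPE_TYPE (the project URL and trigger token are placeholders):

# PIPE_TYPE must be known when the pipeline is created, e.g. set in the
# "Run pipeline" form, on a schedule, or passed through the trigger API:
#   curl -X POST -F token=<trigger-token> -F ref=master \
#        -F "variables[PIPE_TYPE]=A" \
#        https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline
stages:
  - build
  - test

build_pipeline_A:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "master" && $PIPE_TYPE == "A"'
  script:
    - echo "Building using A."

build_pipeline_B:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "master" && $PIPE_TYPE == "B"'
  script:
    - echo "Building using B."

The test jobs would carry the same rules clause as their corresponding build job.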

Resources