Gitlab send current date in output

I have a gitlab-ci.yml file. After each step, I'd like to send an output via REST containing the current date. Just sending an output via REST works, but I'm having difficulties passing in the current date. I'm currently solving it like below (by exporting a variable):
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
before_script:
  - apk add curl
  - export mydate = $(date -I)
stages:
  - validate
  - plan
  - apply
validate:
  stage: validate
  script:
    - terraform validate
    - <curl request>
  variables:
    msg: "$mydate => Validation complete, moving on"
plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  variables:
    msg: "$mydate => Planning complete, moving on"
  dependencies:
    - validate
$ export mydate = $(date -I)
/bin/sh: export: line 97: : bad variable name
Whatever variable name I choose, I always get this error message

That's because you have spaces around the = in your assignment. The shell passes mydate, =, and the date value to export as separate arguments, and = is not a valid variable name.
Instead of writing export mydate = $(date -I), write export mydate=$(date -I).
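For completeness, a minimal sketch of the corrected setup; the curl target ($WEBHOOK_URL) is a placeholder, since the real request isn't shown in the question:
before_script:
  - apk add curl
  - export mydate=$(date -I)
validate:
  stage: validate
  script:
    - terraform validate
    - 'curl --request POST --data "msg=$mydate => Validation complete, moving on" "$WEBHOOK_URL"'
Note that the msg entries under variables: are expanded by GitLab before before_script runs, so they will not pick up a value exported at runtime; building the message inside script, as above, avoids that.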

gitlab ci/cd conditional 'when: manual'?

Is it possible for a GitLab CI/CD job to be triggered manually only under certain conditions that are evaluated based on the output of jobs earlier in the pipeline? I would like my 'terraform apply' job to run automatically if my infrastructure hasn't changed, or ideally to be skipped entirely, but to be triggered manually if it has.
My .gitlab-ci.yml file is below. I'm using OPA to set an environment variable to true or false when my infrastructure changes, but as far as I can tell, I can only include or exclude jobs when the pipeline is set up, based on e.g. git branch information, not at pipeline run time.
Thanks!
default:
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
  before_script:
    - echo ${AWS_PROFILE}
    - echo ${TF_ROOT}
plan:
  script:
    - cd ${TF_ROOT}
    - terraform init
    - terraform plan -var "profile=${AWS_PROFILE}" -out tfplan.binary
    - terraform show -json tfplan.binary > tfplan.json
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
      - ${TF_ROOT}/tfplan.json
validate:
  image:
    name: openpolicyagent/opa:latest-debug
    entrypoint: [""]
  script:
    - cd ${TF_ROOT}
    - /opa eval --format pretty --data ../../policy/terraform.rego --input tfplan.json "data.policy.denied"
    - AUTHORISED=`/opa eval --format raw --data ../../policy/terraform.rego --input tfplan.json "data.policy.authorised"`
    - echo INFRASTRUCTURE_CHANGED=`/opa eval --format raw --data ../../policy/terraform_infrastructure_changed.rego --input tfplan.json "data.policy.changed"` >> validate.env
    - cat validate.env
    - if [ $AUTHORISED == 'false' ]; then exit 1; else exit 0; fi
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
    reports:
      dotenv: ${TF_ROOT}/validate.env
  needs: ["plan"]
apply:
  script:
    - echo ${INFRASTRUCTURE_CHANGED}
    - cd ${TF_ROOT}
    - terraform apply tfplan.binary
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
  needs:
    - job: validate
      artifacts: true
  when: manual
  rules:
    - allow_failure: false

Command not found on gitlab ci using cat

I have a GitLab job in which I get a value from a .txt file. This value (v100322.1) was written into the text file in a previous stage and passed to the job through artifacts.
When I try to get the value from the file with the cat command, I get this error in the pipeline:
$ $PACKAGE_VERSION=$(cat build.txt)
+++ cat build.txt
++ $'=\377\376v100322.1\r'
bash: line 132: $'=\377\376v100322.1\r': command not found
And this is my YAML file for GitLab-CI:
stages:
  - deploy
  - trigger
.deploy_job_base:
  stage: deploy
  tags:
    - dotnet
  script:
    - $PACKAGE_VERSION="v100322.1"
    - ${PACKAGE_VERSION} > build.txt
  artifacts:
    expire_in: 1 week
    paths:
      - build.txt
  allow_failure: false
deploy_job_sport:
  extends: .deploy_job_base
deploy_job_TestClient:
  extends: .deploy_job_base
# trigger GitLab API call
.trigger_base:
  stage: trigger
  script:
    - $PACKAGE_VERSION=$(cat build.txt)
    - 'curl --include --fail --request POST --form "token=$CI_JOB_TOKEN" --form "PACKAGE_VERSION=$PACKAGE_VERSION" --form "ref=feature/1000" $GITLAB_BASE_URL/api/v4/projects/$APP_PROJECT_ID/trigger/pipeline'
trigger_sport:
  extends: .trigger_base
  variables:
    APP_PROJECT_ID: "2096"
  needs: [deploy_job_sport]
  dependencies:
    - deploy_job_sport
trigger_TestClient:
  extends: .trigger_base
  variables:
    APP_PROJECT_ID: "2110"
  needs: [deploy_job_TestClient]
  dependencies:
    - deploy_job_TestClient
Do you know what the problem is here?
Thanks in advance.
The cause is the syntax. You can always check it on a virtual machine or, better, pull down the Docker image the job runs on and test the script in there
(using docker run -it ${JOB_DOCKER_IMAGE} /bin/bash, for instance).
I just tested your script and got this:
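Roughly, with PACKAGE_VERSION unset, the shell expands the line and then tries to run what is left over as a command:
$ $PACKAGE_VERSION="v100322.1"
bash: =v100322.1: command not found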
You can clearly see that bash will not accept the $ in front of PACKAGE_VERSION in an assignment and instead interprets the expanded remainder as a command.
But you can turn the script of .deploy_job_base into a one-liner like this:
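A minimal sketch of that one-liner, keeping the version string from the original job:
.deploy_job_base:
  stage: deploy
  tags:
    - dotnet
  script:
    - echo "v100322.1" > build.txt
  artifacts:
    expire_in: 1 week
    paths:
      - build.txt
  allow_failure: false
The same $ prefix also trips up the read in .trigger_base; in bash that assignment would be PACKAGE_VERSION=$(cat build.txt), without the leading $.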
That way you circumvent the need for defining a variable and just dump the value straight into the build.txt file.

Gitlab CI: Passing dynamic variables

I am looking to pass variable values dynamically to the terraform image, as shown below and as mentioned in the link:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - /usr/bin/env
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
    - 'ACCESS_KEY_ID=${ENV}_AWS_ACCESS_KEY_ID'
    - 'SECRET_ACCESS_KEY=${ENV}_AWS_SECRET_ACCESS_KEY'
    - 'DEFAULT_REGION=${ENV}_AWS_DEFAULT_REGION'
    - 'export AWS_ACCESS_KEY_ID=${!ACCESS_KEY_ID}'
    - 'export AWS_SECRET_ACCESS_KEY=${!SECRET_ACCESS_KEY}'
    - 'export AWS_DEFAULT_REGION=${!DEFAULT_REGION}'
However, I am getting empty values. How can I pass dynamic values to the variables?
The confusion arises from the subtle fact that the GitLab runner executes the commands in the script section using sh rather than bash.
The core issue is that the following syntax
'export AWS_ACCESS_KEY_ID=${!ACCESS_KEY_ID}'
is understood only by bash, not by sh.
Therefore, we need to work around it using syntax that sh understands.
For your case, something like the following should do it:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - /usr/bin/env
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
job:
  before_script:
    - ACCESS_KEY_ID=${ENV}_AWS_ACCESS_KEY_ID
    - export AWS_ACCESS_KEY_ID=$(eval echo \$$ACCESS_KEY_ID)
    - SECRET_ACCESS_KEY=${ENV}_AWS_SECRET_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$(eval echo \$$SECRET_ACCESS_KEY)
    - DEFAULT_REGION=${ENV}_AWS_DEFAULT_REGION
    - export AWS_DEFAULT_REGION=$(eval echo \$$DEFAULT_REGION)
  script:
    - echo $AWS_ACCESS_KEY_ID
    - echo $AWS_SECRET_ACCESS_KEY
    - echo $AWS_DEFAULT_REGION
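To make the indirection easier to follow, here is what those lines do in plain sh, with hypothetical values (in a real pipeline, ENV and DEV_AWS_ACCESS_KEY_ID would come from CI/CD variables):
ENV=DEV
DEV_AWS_ACCESS_KEY_ID=AKIAEXAMPLE                         # hypothetical value, normally a masked CI/CD variable
ACCESS_KEY_ID=${ENV}_AWS_ACCESS_KEY_ID                    # builds the name "DEV_AWS_ACCESS_KEY_ID"
export AWS_ACCESS_KEY_ID=$(eval echo \$$ACCESS_KEY_ID)    # looks that name up -> AKIAEXAMPLE
echo $AWS_ACCESS_KEY_ID                                   # prints AKIAEXAMPLE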

Is there a way to dynamically choose whether a job is run in a Gitlab CI pipeline?

I am trying to have one job check for a word being present in a config file and have that determine whether a subsequent trigger job occurs or not...
Like so...
stages:
  - check
  - trigger_pipeline
variables:
  TRIGGER_JOB: "0"
  CONFIG_FILE: "default.json"
check:
  stage: check
  script:
    - |
      if grep -q keyword "$CONFIG_FILE"; then
        TRIGGER_JOB="1"
      fi
    - echo "TRIGGER_JOB=${TRIGGER_JOB}" >> variables.env
  artifacts:
    reports:
      dotenv: "variables.env"
trigger_pipeline:
  stage: trigger_pipeline
  rules:
    - if: '$TRIGGER_JOB == "1"'
  trigger:
    project: downstream/project
    branch: staging
    strategy: depend
  needs: ["check"]
It seems like I've reached a limitation of GitLab: the trigger_pipeline job doesn't even get created, because the pipeline initializes with TRIGGER_JOB: "0", so the check I do later doesn't matter even when the keyword is found.
Is there any way to dynamically decide whether this trigger_pipeline job is created or not?
I would just put it all in one job and trigger the downstream pipeline through the API, but then of course I can't depend on the downstream status, which is something I want as well (and isn't possible when triggering through the API, from everything I've found in the docs).
Any advice would be appreciated. Thanks!
The closest thing to what you're describing is dynamic child pipelines. That would allow you to create a pipeline configuration dynamically in one job, then run it.
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml
child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
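Here, generate-ci-config stands in for whatever script produces valid CI YAML. For the keyword check from the question, a rough sketch of such a generator might look like this (job names are illustrative):
generate-config:
  stage: build
  script:
    - |
      if grep -q keyword "$CONFIG_FILE"; then
        printf 'trigger_downstream:\n  trigger:\n    project: downstream/project\n    branch: staging\n    strategy: depend\n' > generated-config.yml
      else
        printf 'noop:\n  script: echo "keyword not found, nothing to trigger"\n' > generated-config.yml
      fi
  artifacts:
    paths:
      - generated-config.yml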
There is a way to dynamically choose whether or not to execute a GitLab job.
In your case, apply the following config:
stages:
  - trigger_pipeline
  - check
variables:
  CONFIG_FILE: "default.json"
check:
  stage: check
  script:
    - |
      if grep -q keyword "$CONFIG_FILE"; then
        curl -s --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs" | jq '.[]'
        JOB_ID=$(curl -s --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs" | jq '.[] | select(.name=="trigger_pipeline") | .id')
        curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/jobs/$JOB_ID/play"
      fi
trigger_pipeline:
  stage: trigger_pipeline
  trigger:
    project: downstream/project
    branch: staging
    strategy: depend
  when: manual
The logic behind this is that you configure your target job, in your case trigger_pipeline, as manual.
Change the stage ordering so that your target job is set up first.
Then, if your logic evaluates to true, in your case
grep -q keyword "$CONFIG_FILE"
the check job executes the target job: it first identifies the target job's id and then runs it by calling the play endpoint of the GitLab API.

How to pass a variable to rules in a GitLab CI pipeline?

I want to use rules in my GitLab CI pipeline to check whether a commit was made from the desired branch and whether there are any fixable issues in the image I pushed to the Harbor registry.
I push the image to the registry and scan it on Harbor, then fetch those results in the earlier stages. Now I want to check whether the image has any fixable issues. If it does, I would like that job to be manual, while leaving the possibility to continue with the pipeline and the stages that come after it. If there are no such issues (they don't appear in the API output from Harbor), I set the variable to 0 and want the pipeline to continue normally. The variable for fixable issues is called FIXABLE. I have tried many ways to assign a value to this variable so that rules can read it, but none of them worked. I will post my latest attempt below so that anyone with an idea or advice can look at it; any help would mean a lot to me. I know that rules are evaluated immediately after the pipeline itself is created, so at this moment I am not really sure how to deal with this.
Thanks in advance!
I have set FINAL_FIXABLE to 60 to check whether the job would run manually.
The issue is that only the job procession results (dev branch, case one) runs, even though FINAL_FIXABLE is set to 60.
After I build and push the image, these are the stages in the pipeline related to this problem:
get results (dev branch):
  stage: Results of scanning image
  image: alpine
  variables:
    RESULTS: ""
    STATUS: ""
    SEVERITY: ""
    FIXABLE: ""
  before_script:
    - apk update && apk upgrade
    - apk --no-cache add curl
    - apk add jq
    - chmod +x ./scan-script.sh
  script:
    - 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/projects/myproject/artifacts/latest?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
    - STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
    - SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
    - FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
    # - echo "$FIXABLE">fixableValue.txt
    - echo "Printing the results of the image scanning process on Harbor registry:"
    - echo "status of scan:$STATUS"
    - echo "severity of scan:$SEVERITY"
    - echo "number of fixable issues:$FIXABLE"
    - echo "For more information of scan results please visit Harbor registry!"
    - FINAL_FIXABLE=$FIXABLE
    - echo $FINAL_FIXABLE
    - FINAL_FIXABLE="60"
    - echo $FINAL_FIXABLE
    - echo "$FINAL_FIXABLE">fixableValue.txt
  only:
    refs:
      - dev
      - some-test-branch
  artifacts:
    paths:
      - fixableValue.txt
get results (other branches):
  stage: Results of scanning image
  dependencies:
    - prep for build (other branches)
  image: alpine
  variables:
    RESULTS: ""
    STATUS: ""
    SEVERITY: ""
    FIXABLE: ""
  before_script:
    - apk update && apk upgrade
    - apk --no-cache add curl
    - apk add jq
    - chmod +x ./scan-script.sh
  script:
    - LATEST_TAG=$(cat tags.txt)
    - echo "Latest tag is $LATEST_TAG"
    - 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/myprojects/artifacts/"${LATEST_TAG}"?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
    - STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
    - SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
    - FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
    # - echo "$FIXABLE">fixableValue.txt
    - echo "Printing the results of the image scanning process on Harbor registry:"
    - echo "status of scan:$STATUS"
    - echo "severity of scan:$SEVERITY"
    - echo "number of fixable issues:$FIXABLE"
    - echo "For more information of scan results please visit Harbor registry!"
    - FINAL_FIXABLE=$FIXABLE
    - echo $FINAL_FIXABLE
    - FINAL_FIXABLE="60"
    - echo $FINAL_FIXABLE
    - echo "$FINAL_FIXABLE">fixableValue.txt
  only:
    refs:
      - master
      - /^(([0-9]+)\.)?([0-9]+)\.x/
      - rc
  artifacts:
    paths:
      - fixableValue.txt
procession results (dev branch, case one):
  stage: Scan results processing
  dependencies:
    - get results (dev branch)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE=="0")
      when: always
procession results (dev branch, case two):
  stage: Scan results processing
  dependencies:
    - get results (dev branch)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE!="0")
      when: manual
      allow_failure: true
procession results (other branch, case one):
  stage: Scan results processing
  dependencies:
    - get results (other branches)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE=="0")
      when: always
procession results (other branch, case two):
  stage: Scan results processing
  dependencies:
    - get results (other branches)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE!="0")
      when: manual
      allow_failure: true
You cannot use these methods to control whether jobs run with rules:, because rules are evaluated at pipeline creation time and cannot be changed once the pipeline is created.
Your best option to dynamically control pipeline configuration like this would probably be dynamic child pipelines.
As a side note, to set environment variables for subsequent jobs, you can use artifacts:reports:dotenv. When this special artifact is passed to subsequent stages/jobs, the variables in the dotenv file will be available in those jobs, as if they were set in the environment:
stages:
  - one
  - two
first:
  stage: one
  script: # create dotenv file with variables to pass
    - echo "VAR_NAME=foo" >> "myvariables.env"
  artifacts:
    reports: # create report to pass variables to subsequent jobs
      dotenv: "myvariables.env"
second:
  stage: two
  script: # variables from dotenv artifact will be in environment automatically
    - echo "${VAR_NAME}" # foo
You are doing basically the same thing with your .txt artifact, which works effectively the same way, but this needs fewer script steps. One key difference is that the dotenv approach allows for somewhat more dynamic control, and it also applies to some other job configuration keys that use environment variables. For example, you can set environment:url dynamically this way.
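A rough sketch of that environment:url pattern (example.com and the job name are placeholders):
deploy_review:
  stage: deploy
  script:
    - echo "Deploying review app..."
    # write the URL computed during the job into a dotenv report
    - echo "DYNAMIC_ENVIRONMENT_URL=https://$CI_ENVIRONMENT_SLUG.example.com" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: $DYNAMIC_ENVIRONMENT_URL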
