Command not found on gitlab ci using cat - gitlab

I have a GitLab job in which I read a value from a .txt file. This value (v100322.1) was written into the text file in a previous stage and passed to the job through artifacts.
When I try to get the value from the file with the cat command, I get this error in the pipeline:
$ $PACKAGE_VERSION=$(cat build.txt)
+++ cat build.txt
++ $'=\377\376v100322.1\r'
bash: line 132: $'=\377\376v100322.1\r': command not found
And this is my YAML file for GitLab-CI:
stages:
  - deploy
  - trigger

.deploy_job_base:
  stage: deploy
  tags:
    - dotnet
  script:
    - $PACKAGE_VERSION="v100322.1"
    - ${PACKAGE_VERSION} > build.txt
  artifacts:
    expire_in: 1 week
    paths:
      - build.txt
  allow_failure: false

deploy_job_sport:
  extends: .deploy_job_base

deploy_job_TestClient:
  extends: .deploy_job_base

# trigger GitLab API call
.trigger_base:
  stage: trigger
  script:
    - $PACKAGE_VERSION=$(cat build.txt)
    - 'curl --include --fail --request POST --form "token=$CI_JOB_TOKEN" --form "PACKAGE_VERSION=$PACKAGE_VERSION" --form "ref=feature/1000" $GITLAB_BASE_URL/api/v4/projects/$APP_PROJECT_ID/trigger/pipeline'

trigger_sport:
  extends: .trigger_base
  variables:
    APP_PROJECT_ID: "2096"
  needs: [deploy_job_sport]
  dependencies:
    - deploy_job_sport

trigger_TestClient:
  extends: .trigger_base
  variables:
    APP_PROJECT_ID: "2110"
  needs: [deploy_job_TestClient]
  dependencies:
    - deploy_job_TestClient
Do you know what the problem is here?
Thanks in advance.

The cause is the syntax. You can always check it in a virtual machine or, better, pull down the Docker image the job executes on and test the script in there
(using docker run -it ${JOB_DOCKER_IMAGE} /bin/bash, for instance).
Testing your script reproduces the error: bash does not like the $ in front of PACKAGE_VERSION on the left-hand side of an assignment, so the line expands to something bash then tries to interpret like a command.
But you can turn the script of .deploy_job_base into a one-liner that dumps the value straight into build.txt, circumventing the need to define a variable at all.
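A minimal sketch of the corrected script lines, assuming a bash shell on the runner (the exact one-liner from the answer is described but not shown, so this is a reconstruction):

```shell
# In .deploy_job_base: dump the value straight into build.txt, with no "$"
# on the left-hand side and no intermediate variable:
echo "v100322.1" > build.txt

# In .trigger_base: plain bash assignment, again without the leading "$".
# The \377\376 prefix and trailing \r in the error message are a UTF-16 BOM
# and a CRLF line ending, so strip those bytes defensively while reading:
PACKAGE_VERSION=$(tr -d '\377\376\r' < build.txt)
echo "$PACKAGE_VERSION"   # prints v100322.1
```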

Related

gitlab ci/cd conditional 'when: manual'?

Is it possible for a GitLab CI/CD job to be triggered manually only under certain conditions that are evaluated based on the output of jobs earlier in the pipeline? I would like my 'terraform apply' job to run automatically if my infrastructure hasn't changed, or ideally to be skipped entirely, but to be triggered manually if it has.
My .gitlab-ci.yml file is below. I'm using OPA to set an environment variable to true or false when my infrastructure changes, but as far as I can tell, I can only include or exclude jobs when the pipeline is set up, based on e.g. git branch information, not at pipeline run time.
Thanks!
default:
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
  before_script:
    - echo ${AWS_PROFILE}
    - echo ${TF_ROOT}

plan:
  script:
    - cd ${TF_ROOT}
    - terraform init
    - terraform plan -var "profile=${AWS_PROFILE}" -out tfplan.binary
    - terraform show -json tfplan.binary > tfplan.json
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
      - ${TF_ROOT}/tfplan.json

validate:
  image:
    name: openpolicyagent/opa:latest-debug
    entrypoint: [""]
  script:
    - cd ${TF_ROOT}
    - /opa eval --format pretty --data ../../policy/terraform.rego --input tfplan.json "data.policy.denied"
    - AUTHORISED=`/opa eval --format raw --data ../../policy/terraform.rego --input tfplan.json "data.policy.authorised"`
    - echo INFRASTRUCTURE_CHANGED=`/opa eval --format raw --data ../../policy/terraform_infrastructure_changed.rego --input tfplan.json "data.policy.changed"` >> validate.env
    - cat validate.env
    - if [ $AUTHORISED == 'false' ]; then exit 1; else exit 0; fi
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
    reports:
      dotenv: ${TF_ROOT}/validate.env
  needs: ["plan"]

apply:
  script:
    - echo ${INFRASTRUCTURE_CHANGED}
    - cd ${TF_ROOT}
    - terraform apply tfplan.binary
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
  needs:
    - job: validate
      artifacts: true
  when: manual
  rules:
    - allow_failure: false

Import external yaml file and use them as environment variable in GITLAB

I have a rest_config.yaml file which looks like this:
host: abcd
apiKey: abcd
secretKey: abcd
I want to import these in my .gitlab-ci.yaml file and use them as my environment variable. How do I do so?
If your YAML file is part of the checked-out repository on which the .gitlab-ci.yaml pipeline operates, that pipeline can read the file in a script: section, as I illustrated here.
That script: section can set environment variables.
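A minimal sketch of such a script: line, assuming rest_config.yaml keeps the flat key: value shape shown in the question (no nesting or quoting, and no real YAML parser involved):

```shell
# Sample file matching the question's rest_config.yaml:
printf 'host: abcd\napiKey: abcd\nsecretKey: abcd\n' > rest_config.yaml

# Read flat "key: value" pairs into exported environment variables.
# Values containing colons or spaces would need a real YAML parser instead.
while IFS=': ' read -r key value; do
  [ -n "$key" ] && export "$key=$value"
done < rest_config.yaml

echo "$host"     # abcd
echo "$apiKey"   # abcd
```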
And you can pass variables explicitly between jobs:

build:
  stage: build
  script:
    - VAR1=foo
    - VAR2=bar
    - echo export VAR1="${VAR1}" > $CI_PROJECT_DIR/variables
    - echo export VAR2="${VAR2}" >> $CI_PROJECT_DIR/variables
  artifacts:
    paths:
      - variables

test:
  stage: test
  script:
    - source $CI_PROJECT_DIR/variables
    - echo VAR1 is $VAR1
    - echo VAR2 is $VAR2
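Assuming a GitLab version that supports dotenv report artifacts (the same mechanism used in other answers on this page), the hand-off can also be done without an explicit source, since GitLab injects the variables into later jobs automatically:

```yaml
build:
  stage: build
  script:
    - echo "VAR1=foo" >> build.env
    - echo "VAR2=bar" >> build.env
  artifacts:
    reports:
      dotenv: build.env

test:
  stage: test
  needs: ["build"]
  script:
    - echo VAR1 is $VAR1   # injected from build.env by GitLab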

Is there a way to dynamically choose whether a job is run in a Gitlab CI pipeline?

I am trying to have one job check for a word being present in a config file and have that determine whether a subsequent trigger job occurs or not...
Like so...
stages:
  - check
  - trigger_pipeline

variables:
  TRIGGER_JOB: "0"
  CONFIG_FILE: "default.json"

check:
  stage: check
  script:
    - |
      if grep -q keyword "$CONFIG_FILE"; then
        TRIGGER_JOB="1"
      fi
    - echo "TRIGGER_JOB=${TRIGGER_JOB}" >> variables.env
  artifacts:
    reports:
      dotenv: "variables.env"

trigger_pipeline:
  stage: trigger_pipeline
  rules:
    - if: '$TRIGGER_JOB == "1"'
  trigger:
    project: downstream/project
    branch: staging
    strategy: depend
  needs: ["check"]
It seems like I've reached a limitation with GitLab: the trigger_pipeline job doesn't even get created, because the pipeline initializes with TRIGGER_JOB: "0" and rules are evaluated at that point, so the check that later sets the variable can't change anything.
Is there any way to dynamically decide whether this trigger_pipeline job is created or not?
I would just put it all in one job and trigger the downstream pipeline through the API, but then I can't depend on the downstream status, which is something I want as well (and isn't possible when triggering through the API, from everything I've found in the docs).
Any advice would be appreciated. Thanks!
The closest thing to what you're describing is dynamic child pipelines. That would allow you to create a pipeline configuration dynamically in one job, then run it.
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
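Here generate-ci-config is a placeholder for whatever emits the YAML. A minimal sketch of such a generator as a shell function, reusing the keyword check from the question (the downstream project and job names are carried over as assumptions):

```shell
# Emit a child-pipeline config that triggers the downstream project only
# when the keyword is present; otherwise emit a no-op job, since a child
# pipeline configuration must contain at least one job.
generate_ci_config() {
  config_file="${1:-default.json}"
  if grep -q keyword "$config_file"; then
    cat <<'EOF'
trigger_downstream:
  trigger:
    project: downstream/project
    branch: staging
    strategy: depend
EOF
  else
    cat <<'EOF'
noop:
  script:
    - echo "keyword not found, nothing to trigger"
EOF
  fi
}
```

In the generate-config job this would be invoked as generate_ci_config > generated-config.yml.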
There is a way to dynamically choose whether or not to execute a GitLab job.
In your case, apply the following config:
stages:
  - trigger_pipeline
  - check

variables:
  CONFIG_FILE: "default.json"

check:
  stage: check
  script:
    - |
      if grep -q keyword "$CONFIG_FILE"; then
        curl -s --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs" | jq '.[]'
        JOB_ID=$(curl -s --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs" | jq '.[] | select(.name=="trigger_pipeline") | .id')
        curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/jobs/$JOB_ID/play"
      fi

trigger_pipeline:
  stage: trigger_pipeline
  trigger:
    project: downstream/project
    branch: staging
    strategy: depend
  when: manual
The logic behind it: you configure your target job, in your case trigger_pipeline, as manual, and change the stage ordering so that your target job is set up first.
Then, if your condition evaluates to true, in your case
grep -q keyword "$CONFIG_FILE"
the check job executes the target job: it first identifies the target job's id, then calls the play endpoint of the GitLab API.

gitlab ci runs a pipeline per stage

I'm working on a project and now I'm adding a basic .gitlab-ci.yml file to it. My problem is that GitLab runs a separate pipeline per stage. What am I doing wrong?
My base .gitlab-ci.yml:
stages:
  - analysis
  - test

include:
  - local: 'telegram_bot/.gitlab-ci.yml'
  - local: 'manager/.gitlab-ci.yml'
  - local: 'dummy/.gitlab-ci.yml'

pylint:
  stage: analysis
  image: python:3.8
  before_script:
    - pip install pylint pylint-exit anybadge
  script:
    - mkdir ./pylint
    - find . -type f -name "*.py" -not -path "*/venv/*" | xargs pylint --rcfile=pylint-rc.ini | tee ./pylint/pylint.log || pylint-exit $?
    - PYLINT_SCORE=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' ./pylint/pylint.log)
    - anybadge --label=Pylint --file=pylint/pylint.svg --value=$PYLINT_SCORE 2=red 4=orange 8=yellow 10=green
    - echo "Pylint score is $PYLINT_SCORE"
  artifacts:
    paths:
      - ./pylint/
    expire_in: 1 day
  only:
    - merge_requests
    - schedules
telegram_bot/.gitlab-ci.yml :
telbot:
  stage: test
  script:
    - echo "telbot sample job success."
manager/.gitlab-ci.yml :
maneger:
  stage: test
  script:
    - echo "manager sample job success."
dummy/.gitlab-ci.yml :
dummy:
  stage: test
  script:
    - echo "dummy sample job success."
and my pipelines look like this :
It is happening because your analysis stage runs only on merge_requests and schedules, while for the other jobs you didn't specify when they should run; in that case, the jobs run on every branch.
When you open the MR, GitLab will run the analysis for the MR (note the detached label) and the other three jobs in a separate pipeline.
To fix it, put this in all of the included manifests:
only:
  - merge_requests
From the docs: https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic
If a job does not have an only rule, only: ['branches', 'tags'] is set by default.
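Applied to telegram_bot/.gitlab-ci.yml, for instance, the fix would look like this:

```yaml
telbot:
  stage: test
  only:
    - merge_requests
  script:
    - echo "telbot sample job success."
```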

How to safely login to private docker registry in gitlab?

I know there are secret variables, and I tried passing the secret to a bash script.
When used in a bash script that has #!/bin/bash -x, the password can be seen in clear text when using the docker login command like this:
docker login "$USERNAME" "$PASSWORD" $CONTAINERREGISTRY
Is there a way to safely login to a container registry in gitlab-ci?
You can use before_script at the beginning of the .gitlab-ci.yml file, or inside each job if you need several authentications:

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin

where $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each script, or at the beginning of the whole file:

after_script:
  - docker logout
I wrote an answer about using GitLab CI and Docker to build Docker images:
https://stackoverflow.com/a/50684269/8247069
GitLab provides an array of environment variables when running a job. You'll want to become familiar with them and use them while developing (running test builds and such), so that you won't need to do anything except set the CI/CD variables in GitLab accordingly (like ENV); GitLab will provide most of what you'd want. See GitLab environment variables.
Just a minor tweak on what has been suggested previously (combining the GitLab-suggested approach with this one): for more information on where and how to use before_script and after_script, see the .gitlab-ci.yml configuration parameters. I tend to put my login command as one of the last in my main before_script (not in the stages) and my logout in a final after_script.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin;

Then further down your .gitlab-ci.yml...

after_script:
  - docker logout;
For my local development, I create a .env file that follows a common convention then the following bash snippet will check if the file exists and import the values into your shell. To make my project secure AND friendly, .env is ignored, but I maintain a .env.sample with safe example values and I DO include that.
if [ -f .env ]; then printf "\n\n::Sourcing .env\n" && set -o allexport; source .env; set +o allexport; fi
Here's a mostly complete example:

image: docker:19.03.9-dind

stages:
  - buildAndPublish

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2

services:
  - docker:19.03.9-dind

before_script:
  - printf "::GitLab ${CI_BUILD_STAGE} stage starting for ${CI_PROJECT_URL}\n";
  - printf "::JobUrl=${CI_JOB_URL}\n";
  - printf "::CommitRef=${CI_COMMIT_REF_NAME}\n";
  - printf "::CommitMessage=${CI_COMMIT_MESSAGE}\n\n";
  - printf "::PWD=${PWD}\n\n";
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin;

build-and-publish:
  stage: buildAndPublish
  script:
    - buildImage;
    - publishImage;
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"' # Run for master, but not otherwise
      when: always
    - when: never
  after_script:
    - docker logout registry.gitlab.com;
