Infinite push loop from a job in a GitLab CI pipeline

I manage the following setup in my AWS infrastructure, with repositories and submodules in GitLab and CI pipelines.
Pipeline Update Git Submodules - GitLab CI
Currently, I need every project to be updated whenever one of its submodules changes, via a pipeline job for this that runs on a shell runner.
But I have no idea how to get this working.
Here are my pipeline configurations.
Submodules:
variables:
  TEST_VAR: "Begin - Update all git submodule in projects."

stages:
  - build
  - triggers

job1:
  stage: build
  script:
    - echo $TEST_VAR

trigger_A:
  stage: triggers
  when: on_success
  trigger:
    project: pruebas-it/proyecto_a
    branch: main

trigger_B:
  stage: triggers
  when: on_success
  trigger:
    project: pruebas-it/proyecto_b
    branch: main
Projects:
variables:
  TEST_VAR: "Begin - Update all git submodule in projects."
  COMMIT: "GIT"
  CHANGES: $(git status --porcelain | wc -l)

stages:
  - build
  - test
  - deploy

job1:
  stage: build
  script:
    - echo $TEST_VAR
    - echo $COMMIT

job2:
  stage: test
  extends: .deploy-dev
  only:
    variables: [ $CI_PIPELINE_SOURCE == "push" ]

job3:
  stage: deploy
  variables:
    DOCKERFILES_DIR-A: './submodulo-a' # This variable should not have a trailing '/' character
    DOCKERFILES_DIR-B: './submodulo-b' # This variable should not have a trailing '/' character
  rules:
    - if: $CI_COMMIT_BRANCH
      changes:
        compare_to: 'refs/heads/main'
        paths:
          - '$DOCKERFILES_DIR-A/*'
          - '$DOCKERFILES_DIR-B/*'
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - ssh -T git@gitlab.com
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - echo "${CI_COMMIT_MESSAGE}" "${GITLAB_USER_EMAIL}" "${CI_REPOSITORY_URL}" "$CI_SERVER_HOST"
    - url_host=$(echo "${CI_REPOSITORY_URL}" | sed -e 's|https\?://gitlab-ci-token:.*@|ssh://git@|g')
    - echo "${url_host}"
    - ssh-keyscan "$CI_SERVER_HOST" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - git submodule sync
    - git submodule update --remote
  script:
    - git branch
    - git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
    - git fetch origin
    - git checkout main
    - git config pull.rebase false
    - git pull
    - echo 1 >> update.txt
    - git status
    - git add -A
    - git commit -m "$COMMIT"
    - git push "${url_host}"

.deploy-dev:
  script: exit
The result of the above: the push that commits the submodule update in each project triggers a new pipeline, so I end up with an infinite cycle of push-triggered jobs and the pipelines never finish.
Trigger without stage deploy
Can somebody help me understand why I never reach a stopping point with this? Please!
Thanks for your attention.
I tried having the submodule update fire a trigger over the two projects in the polyrepo for the common folder, without success.
I tried using variables to tell push pipelines apart from trigger pipelines, with no result; the job does not run in the pipeline with that conditional.
I tried using workflow rules for this, but that did not solve it.
I tried using a regex match, but the pipeline still pushes in an infinite loop.
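A hedged sketch (not part of the original post) of how such a commit-and-push loop is commonly broken: either skip pipeline creation for the push that the job itself makes, using GitLab's ci.skip push option, or filter out the bot's own commits with workflow rules. The commit-message prefix below ("GIT", taken from the COMMIT variable above) is only illustrative.
# Option 1: do not create a pipeline for the push made by the job itself.
  script:
    - git commit -m "$COMMIT"
    - git push -o ci.skip "${url_host}"

# Option 2: drop push pipelines whose commit message matches the bot's own commits.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_MESSAGE =~ /^GIT/'
      when: never
    - when: always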

Related

gitlab ci/cd conditional 'when: manual'?

Is it possible for a GitLab CI/CD job to be triggered manually only under certain conditions that are evaluated based on the output of jobs earlier in the pipeline? I would like my 'terraform apply' job to run automatically if my infrastructure hasn't changed, or ideally to be skipped entirely, but to be triggered manually if it has.
My .gitlab-ci.yml file is below. I'm using OPA to set an environment variable to true or false when my infrastructure changes, but as far as I can tell, I can only include or exclude jobs when the pipeline is set up, based on e.g. git branch information, not at pipeline run time.
Thanks!
default:
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
  before_script:
    - echo ${AWS_PROFILE}
    - echo ${TF_ROOT}

plan:
  script:
    - cd ${TF_ROOT}
    - terraform init
    - terraform plan -var "profile=${AWS_PROFILE}" -out tfplan.binary
    - terraform show -json tfplan.binary > tfplan.json
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
      - ${TF_ROOT}/tfplan.json

validate:
  image:
    name: openpolicyagent/opa:latest-debug
    entrypoint: [""]
  script:
    - cd ${TF_ROOT}
    - /opa eval --format pretty --data ../../policy/terraform.rego --input tfplan.json "data.policy.denied"
    - AUTHORISED=`/opa eval --format raw --data ../../policy/terraform.rego --input tfplan.json "data.policy.authorised"`
    - echo INFRASTRUCTURE_CHANGED=`/opa eval --format raw --data ../../policy/terraform_infrastructure_changed.rego --input tfplan.json "data.policy.changed"` >> validate.env
    - cat validate.env
    - if [ $AUTHORISED == 'false' ]; then exit 1; else exit 0; fi
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
    reports:
      dotenv: ${TF_ROOT}/validate.env
  needs: ["plan"]

apply:
  script:
    - echo ${INFRASTRUCTURE_CHANGED}
    - cd ${TF_ROOT}
    - terraform apply tfplan.binary
  artifacts:
    paths:
      - ${TF_ROOT}/.terraform
      - ${TF_ROOT}/.terraform.lock.hcl
      - ${TF_ROOT}/tfplan.binary
  needs:
    - job: validate
      artifacts: true
  when: manual
  rules:
    - allow_failure: false

How to pass a variable to rules in a GitLab CI pipeline?

I want to use rules in my GitLab CI pipeline to check whether the commit comes from the desired branch and whether there are any fixable issues in the image I pushed to the Harbor registry.
I push the image to the registry and scan it on Harbor, then fetch those results in earlier stages. Now I want to check whether the image has any fixable issues. If it does, I want that job to be manual but still leave the possibility of continuing the pipeline and the stages that come after it. If I don't find any such issues (they are not in the API output from Harbor), I just set that variable to 0 and want the pipeline to continue normally. The variable for fixable issues is called FIXABLE. I have tried many ways to assign a value to this variable so that rules can read it, but none of them worked. I will post my latest attempt below so that anyone with an idea or advice can look at it. Any help would mean a lot to me. I know that rules are evaluated immediately after the pipeline itself is created, so at this point I am not really sure how to deal with this.
Thanks in advance!
I added a value of 60 to the variable FINAL_FIXABLE to check whether the job would run manually.
The issue is that only the job "procession results (dev branch, case one)" runs, even though FINAL_FIXABLE is set to 60.
After I build and push the image, these are the stages in the pipeline related to this problem:
get results (dev branch):
  stage: Results of scanning image
  image: alpine
  variables:
    RESULTS: ""
    STATUS: ""
    SEVERITY: ""
    FIXABLE: ""
  before_script:
    - apk update && apk upgrade
    - apk --no-cache add curl
    - apk add jq
    - chmod +x ./scan-script.sh
  script:
    - 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/projects/myproject/artifacts/latest?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
    - STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
    - SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
    - FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
    # - echo "$FIXABLE">fixableValue.txt
    - echo "Printing the results of the image scanning process on Harbor registry:"
    - echo "status of scan:$STATUS"
    - echo "severity of scan:$SEVERITY"
    - echo "number of fixable issues:$FIXABLE"
    - echo "For more information of scan results please visit Harbor registry!"
    - FINAL_FIXABLE=$FIXABLE
    - echo $FINAL_FIXABLE
    - FINAL_FIXABLE="60"
    - echo $FINAL_FIXABLE
    - echo "$FINAL_FIXABLE">fixableValue.txt
  only:
    refs:
      - dev
      - some-test-branch
  artifacts:
    paths:
      - fixableValue.txt

get results (other branches):
  stage: Results of scanning image
  dependencies:
    - prep for build (other branches)
  image: alpine
  variables:
    RESULTS: ""
    STATUS: ""
    SEVERITY: ""
    FIXABLE: ""
  before_script:
    - apk update && apk upgrade
    - apk --no-cache add curl
    - apk add jq
    - chmod +x ./scan-script.sh
  script:
    - LATEST_TAG=$(cat tags.txt)
    - echo "Latest tag is $LATEST_TAG"
    - 'RESULTS=$(curl -H "Authorization: Basic `echo -n ${HARBOR_USER}:${HARBOR_PASSWORD} | base64`" -X GET "https://myregistry/myprojects/artifacts/"${LATEST_TAG}"?page=1&page_size=10&with_tag=true&with_label=true&with_scan_overview=true&with_signature=true&with_immutable_status=true")'
    - STATUS=$(./scan-script.sh "STATUS" "$RESULTS")
    - SEVERITY=$(./scan-script.sh "SEVERITY" "$RESULTS")
    - FIXABLE=$(./scan-script.sh "FIXABLE" "$RESULTS")
    # - echo "$FIXABLE">fixableValue.txt
    - echo "Printing the results of the image scanning process on Harbor registry:"
    - echo "status of scan:$STATUS"
    - echo "severity of scan:$SEVERITY"
    - echo "number of fixable issues:$FIXABLE"
    - echo "For more information of scan results please visit Harbor registry!"
    - FINAL_FIXABLE=$FIXABLE
    - echo $FINAL_FIXABLE
    - FINAL_FIXABLE="60"
    - echo $FINAL_FIXABLE
    - echo "$FINAL_FIXABLE">fixableValue.txt
  only:
    refs:
      - master
      - /^(([0-9]+)\.)?([0-9]+)\.x/
      - rc
  artifacts:
    paths:
      - fixableValue.txt

procession results (dev branch, case one):
  stage: Scan results processing
  dependencies:
    - get results (dev branch)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE=="0")
      when: always

procession results (dev branch, case two):
  stage: Scan results processing
  dependencies:
    - get results (dev branch)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "some-test-branch") && ($FINAL_FIXABLE!="0")
      when: manual
      allow_failure: true

procession results (other branch, case one):
  stage: Scan results processing
  dependencies:
    - get results (other branches)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE=="0")
      when: always

procession results (other branch, case two):
  stage: Scan results processing
  dependencies:
    - get results (other branches)
  image: alpine
  script:
    - FINAL_FIXABLE=$(cat fixableValue.txt)
    - echo $CI_COMMIT_BRANCH
    - echo $FINAL_FIXABLE
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "rc" || $CI_COMMIT_BRANCH =~ "/^(([0-9]+)\.)?([0-9]+)\.x/") && ($FINAL_FIXABLE!="0")
      when: manual
      allow_failure: true
You cannot use these methods for controlling whether jobs run with rules: because rules are evaluated at pipeline creation time and cannot be changed once the pipeline is created.
Your best option to dynamically control pipeline configuration like this would probably be dynamic child pipelines.
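A minimal sketch of that parent/child pattern (illustrative only; generate-child-pipeline.sh is a hypothetical helper, not something from your project):
stages:
  - generate
  - child

generate-config:
  stage: generate
  script:
    # the hypothetical script decides at runtime which jobs (manual or
    # automatic) to write into the generated configuration
    - ./generate-child-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child:
  stage: child
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-config
    strategy: depend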
As a side note, to set environment variables for subsequent jobs, you can use artifacts:reports:dotenv. When this special artifact is passed to subsequent stages/jobs, the variables in the dotenv file are available in the job, as if they were set in the environment:
stages:
  - one
  - two

first:
  stage: one
  script: # create dotenv file with variables to pass
    - echo "VAR_NAME=foo" >> "myvariables.env"
  artifacts:
    reports: # create report to pass variables to subsequent jobs
      dotenv: "myvariables.env"

second:
  stage: two
  script: # variables from dotenv artifact will be in environment automatically
    - echo "${VAR_NAME}" # foo
You are doing basically the same thing with your .txt artifact, which works effectively the same way, but this approach needs fewer script steps. One key difference is that it allows somewhat more dynamic control and applies to some other job configuration keys that use environment variables. For example, you can set environment:url dynamically this way.
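For example, a sketch of a dynamic environment:url (illustrative names only; deploy.env and the URL are made up):
deploy:
  stage: deploy
  script:
    # write the URL into a dotenv report; GitLab picks it up for environment:url
    - echo "DYNAMIC_ENVIRONMENT_URL=https://review-app.example.com" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: $DYNAMIC_ENVIRONMENT_URL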

gitlab ci runs a pipeline per stage

I'm working on a project and am now adding a basic .gitlab-ci.yml file to it. My problem is: why does GitLab run a separate pipeline per stage? What am I doing wrong?
My project structure tree:
My base .gitlab-ci.yml:
stages:
  - analysis
  - test

include:
  - local: 'telegram_bot/.gitlab-ci.yml'
  - local: 'manager/.gitlab-ci.yml'
  - local: 'dummy/.gitlab-ci.yml'

pylint:
  stage: analysis
  image: python:3.8
  before_script:
    - pip install pylint pylint-exit anybadge
  script:
    - mkdir ./pylint
    - find . -type f -name "*.py" -not -path "*/venv/*" | xargs pylint --rcfile=pylint-rc.ini | tee ./pylint/pylint.log || pylint-exit $?
    - PYLINT_SCORE=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' ./pylint/pylint.log)
    - anybadge --label=Pylint --file=pylint/pylint.svg --value=$PYLINT_SCORE 2=red 4=orange 8=yellow 10=green
    - echo "Pylint score is $PYLINT_SCORE"
  artifacts:
    paths:
      - ./pylint/
    expire_in: 1 day
  only:
    - merge_requests
    - schedules
telegram_bot/.gitlab-ci.yml:
telbot:
  stage: test
  script:
    - echo "telbot sample job sucsess."
manager/.gitlab-ci.yml:
maneger:
  stage: test
  script:
    - echo "manager sample job sucsess."
dummy/.gitlab-ci.yml:
dummy:
  stage: test
  script:
    - echo "dummy sample job sucsess."
And my pipelines look like this:
It is happening because your analysis stage runs only on merge_requests and schedules, while for the other jobs you didn't specify when they should run; in that case, those jobs run on every branch.
When you open the MR, GitLab runs the analysis job for the MR (note the detached label) and the other three jobs in a separate pipeline.
To fix it, put this in all of the included manifests:
only:
  - merge_requests
From the docs: https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic
If a job does not have an only rule, only: ['branches', 'tags'] is set by default.
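Applied to one of the included files, the suggested fix would look like this (an illustrative sketch of telegram_bot/.gitlab-ci.yml, not from the original answer):
telbot:
  stage: test
  only:
    - merge_requests
  script:
    - echo "telbot sample job sucsess."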

How to stop a job in gitlab-ci.yml when there is a failure in a previous stage

I have a Sonar report. If the quality gate passes, the pipeline should run the next stage and do the deployment; if the quality gate fails, the GitLab job should stop. But among the stages we have a rollback job that runs when there is a failure, so if Sonar fails, that rollback is executed. I want to stop that rollback execution: it should run only when the deployment job fails, which is basically the stage right after Sonar.
image: maven-jdk-8

cache:
  paths:
    - ./.devops_test/

stages:
  - codescan
  - Sonarbuild breaker
  - createartifact
  - artifactpublish
  - artifactdownload
  - deploy_test
  - rollback

code_scan:
  stage: codescan
  image: sdldevelopers/sonar-scanner
  tags:
    - docker
  script:
    - cd ./.devops_test
    - java -jar SourceCode_Extract_V3.jar ../07-METADATA/metadata/ javascript_extracts/
    - chmod 777 ../02-SHELL/stage-codescan.sh
    - cd ..
    - ./02-SHELL/stage-codescan.sh
  allow_failure: false

Sonar Build Breaker:
  stage: Sonarbuild breaker
  tags:
    - test-shell-runner
  script:
    - chmod 777 /xxx/quality_gate_status_Check.sh
    - /xxx/quality_gate_status_Check.sh
  allow_failure: false

archive_metadata:
  stage: createartifact
  tags:
    - tag-docker-grp
  script:
    - zip ./.devops/lib/metadata.zip -r ./07-METADATA/
  only:
    - test-pipeline_test
  when: on_success

metadata_publish:
  stage: artifactpublish
  image: meisterplan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: xxx
    REPO_NAME: test
    ARTIFACTORY_KEY: zzzz
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "./.devops/lib/my_metadata.zip" "$REPO_NAME"/test/test"$CI_PIPELINE_ID".zip --recursive=false
  tags:
    - tag-docker-grp
  only:
    - test-pipeline_test

metadata_download:
  stage: artifactdownload
  variables:
    ARTIFACTORY_BASE_URL: xxxx
    REPO_NAME: dddd
    ARTIFACTORY_KEY: ffff
  script:
    - cd /home/test/newmetadata/
    - wget https://axxxxx"$CI_PIPELINE_ID".zip
    - mv test"$CI_PIPELINE_ID".zip test_metadata.zip
  tags:
    - test-shell-runner
  only:
    - test-pipeline_test

Deploy_code:
  stage: deploy_test
  tags:
    - test-shell-runner
  script:
    - cd ./02-SHELL/
    - pwd
    - echo $CI_PIPELINE_ID > /home/test/newmetadata/build_test.txt
    - echo $CI_PIPELINE_ID > /home/test/newmetadata/postbuild_test.txt
    - ansible-playbook -i /etc/ansible/hosts deployment.yml -v
  only:
    - test-pipeline_test

rollback_test_deploy:
  stage: rollback
  tags:
    - test-shell-runner
  script:
    - cd /home/test/newmetadata/
    - chmod 777 /home/test/newmetadata/postbuild_test.txt
    - previousbuild=$(cat /home/test/newmetadata/postbuild_test.txt)
    - echo "previous successfull build is $previousbuild"
    - wget xxx"$previousbuild".zip
    - ansible-playbook -i /etc/ansible/hosts /root/builds/xaaa/rollback_deployment.yml -e "previousbuild=${previousbuild}" -vv
  when: on_failure
You can mark with a file whether codescan succeeded:
code_scan:
  artifacts:
    paths:
      - codescan_succeeded
  stage: codescan
  image: sdldevelopers/sonar-scanner
  tags:
    - docker
  script:
    - cd ./.devops_test
    - java -jar SourceCode_Extract_V3.jar ../07-METADATA/metadata/ javascript_extracts/
    - chmod 777 ../02-SHELL/stage-codescan.sh
    - cd ..
    - ./02-SHELL/stage-codescan.sh
    # for further jobs down the pipeline mark this job as succeeded
    - touch codescan_succeeded
If codescan fails, there is no file codescan_succeeded. In the rollback job, check if the file exists. If it does not exist, you can abort the rollback job:
rollback_test_deploy:
  stage: rollback
  tags:
    - test-shell-runner
  script:
    # if codescan did not succeed, no need to run the rollback
    - if [ ! -f codescan_succeeded ]; then exit 0; fi
    - cd /home/test/newmetadata/
    - chmod 777 /home/test/newmetadata/postbuild_test.txt
    - previousbuild=$(cat /home/test/newmetadata/postbuild_test.txt)
    - echo "previous successfull build is $previousbuild"
    - wget xxx"$previousbuild".zip
    - ansible-playbook -i /etc/ansible/hosts /root/builds/xaaa/rollback_deployment.yml -e "previousbuild=${previousbuild}" -vv
  when: on_failure
You don't need to mark jobs with allow_failure: false. That's the default value.

Configuring gitlab-ci.yml file in Gitlab with Vue

Trying to use CI/CD with VueJS and GitLab.
I've played around with 50 different gitlab-ci.yml configurations and keep having loads of issues with different stages.
I followed the following tutorial to a T:
https://about.gitlab.com/2017/09/12/vuejs-app-gitlab/
build site:
  image: node:6
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com
I skipped the testing phase because it keeps failing... so why not just skip it.
If it helps, with this configuration, I keep getting the following error:
What does your gitlab-ci.yml file (that works) for a VueJS webapp look like?
I bumped into this question yesterday while I was also researching how to set up CI/CD on GitLab. After 24 hours of research and testing, I finally got a working script. Hope this helps.
For this to work, you will need to:
Set up a variable STAGING_PRIVATE_KEY in the Settings -> Variables section of your project
Add your ssh key to the list of known hosts on your server
Below is my final script:
image: node:latest

stages:
  - build
  - deploy

build site:
  stage: build
  before_script:
    - apt-get update
    - apt-get install zip unzip nodejs npm -y
    - npm install --progress=false
  cache:
    paths:
      - node_modules/
  script:
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist/

deploy:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$STAGING_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan YOUR-SERVER-IP >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh -p22 root@YOUR-SERVER-IP "mkdir /var/www/_tmp"
    - scp -p22 -r /builds/YOUR-USERNAME/YOUR-REPO-TITLE/dist/* root@form.toprecng.org:/var/www/form.toprecng.org/_tmp
    - ssh -p22 root@YOUR-SERVER-IP "mv /var/www/html/ /var/www/_old && mv /var/www/_tmp /var/www/html/"
    - ssh -p22 root@YOUR-SERVER-IP "rm -rf /var/www/_old"
