How to differentiate when the MR is accepted - GitLab

I'd like to have a conditional branch in the deploy script of my .gitlab-ci.yml file, based on whether the particular pipeline is running because an MR has been accepted and is being merged into the default branch.
For instance,
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - if [ "${CI_DEFAULT_BRANCH}" = "${CI_COMMIT_BRANCH}" ]; then \
        rsync $BUILD_DIR/myfile.tar.gz filestore1 ; \
      else \
        rsync $BUILD_DIR/myfile.tar.gz filestore2 ; \
      fi
I don't think that the comparison of CI_DEFAULT* with CI_COMMIT* is a safe comparison. I believe this works anytime a commit is on the default branch, not when being merged, so I think I'm missing an important distinction and/or a best-practice.
My intent is that so long as the MR is still in dev, it should be pushed to filestore2, and only pushed to the production filestore1 when the MR is good, tests good/complete, and is accepted.
Related:
Gitlab: How to run a deploy job on a tagged version / release? requires me to tag when I need to release something; this may be something I work into my workflow, but is not our current workflow

The code you have makes sense to fulfill the purpose you described and should work as-is (assuming the bash code is otherwise well-formed).
When changes are merged to the default branch, CI_COMMIT_BRANCH will be the same as CI_DEFAULT_BRANCH and the condition you have would evaluate to true, causing it to deploy to filestore1 -- for pipelines running on any other branch, filestore2 would be used when this job runs.
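If you want to make that intent explicit in the job definition rather than inside the shell script, one hedged alternative is to split the deployment into two jobs and scope them with rules:. This is only a sketch, assuming the same $BUILD_DIR, $MY_SSH_PRIVATE_KEY and filestore targets as in your example:

Deploy production:
  stage: deploy
  image: alpine
  rules:
    # runs only on pipelines for the default branch, i.e. after the MR has been merged
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    - apk add openssh-client rsync
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - rsync $BUILD_DIR/myfile.tar.gz filestore1

Deploy dev:
  stage: deploy
  image: alpine
  rules:
    # runs on branch pipelines other than the default branch
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
  script:
    - apk add openssh-client rsync
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - rsync $BUILD_DIR/myfile.tar.gz filestore2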

Your code could be written as follows; this way the file is pushed to filestore1 only when the MR is merged to main or master, and to filestore2 otherwise.
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - if [ "${CI_COMMIT_BRANCH}" = "main" ] || [ "${CI_COMMIT_BRANCH}" = "master" ]; then \
        rsync $BUILD_DIR/myfile.tar.gz filestore1 ; \
      else \
        rsync $BUILD_DIR/myfile.tar.gz filestore2 ; \
      fi
The if statement checks for master or main here. Note that string comparisons inside [ ] use = (with the two checks combined by ||); -eq is only for integers.

Infinite push in a Job of a pipeline in GitLab CI

I manage the following architecture (see the diagram below) on AWS infrastructure, with repos and submodules in GitLab and pipelines in CI.
[Diagram: Pipeline Update Git Submodules - GitLab CI]
Currently, I need to update every project whenever one of its submodules is updated, using a pipeline job for this on a shell runner.
But I don't know how to make this finish correctly.
I share my pipeline code below.
Submodules:
variables:
  TEST_VAR: "Begin - Update all git submodule in projects."

stages:
  - build
  - triggers

job1:
  stage: build
  script:
    - echo $TEST_VAR

trigger_A:
  stage: triggers
  when: on_success
  trigger:
    project: pruebas-it/proyecto_a
    branch: main

trigger_B:
  stage: triggers
  when: on_success
  trigger:
    project: pruebas-it/proyecto_b
    branch: main
Projects:
variables:
  TEST_VAR: "Begin - Update all git submodule in projects."
  COMMIT: "GIT"
  CHANGES: $(git status --porcelain | wc -l)

stages:
  - build
  - test
  - deploy

job1:
  stage: build
  script:
    - echo $TEST_VAR
    - echo $COMMIT

job2:
  stage: test
  extends: .deploy-dev
  only:
    variables: [ $CI_PIPELINE_SOURCE == "push" ]

job3:
  stage: deploy
  variables:
    DOCKERFILES_DIR-A: './submodulo-a' # This variable should not have a trailing '/' character
    DOCKERFILES_DIR-B: './submodulo-b' # This variable should not have a trailing '/' character
  rules:
    - if: $CI_COMMIT_BRANCH
      changes:
        compare_to: 'refs/heads/main'
        paths:
          - '$DOCKERFILES_DIR-A/*'
          - '$DOCKERFILES_DIR-B/*'
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - ssh -T git@gitlab.com
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - echo "${CI_COMMIT_MESSAGE}" "${GITLAB_USER_EMAIL}" "${CI_REPOSITORY_URL}" "$CI_SERVER_HOST"
    - url_host=$(echo "${CI_REPOSITORY_URL}" | sed -e 's|https\?://gitlab-ci-token:.*@|ssh://git@|g')
    - echo "${url_host}"
    - ssh-keyscan "$CI_SERVER_HOST" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - git submodule sync
    - git submodule update --remote
  script:
    - git branch
    - git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
    - git fetch origin
    - git checkout main
    - git config pull.rebase false
    - git pull
    - echo 1 >> update.txt
    - git status
    - git add -A
    - git commit -m "$COMMIT"
    - git push "${url_host}"

.deploy-dev:
  script: exit
The result of the above is that the job pushes a commit to update the submodules in every project, and then I get an infinite cycle of pushes and running jobs, and the pipelines never finish.
Trigger without the deploy stage.
Can somebody help me understand why this never reaches an end state? Please!
Thanks for your attention.
I tried having the submodule update launch a trigger over the two projects in the polyrepo for the common folder, without success.
I tried using variable conditions on the push for the trigger, with no result; the job does not run in the pipeline with that conditional.
I tried to use workflow rules for this, but the pipeline still does not finish.
I tried to use a regex match, but the pipeline still runs an infinite cycle of pushes.
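One detail worth noting about the loop itself: the commit that job3 pushes starts a new pipeline in the same project, which pushes again, and so on. A common way to break this kind of cycle is to push with GitLab's ci.skip push option (or to put [skip ci] in the commit message) so the automated push does not trigger another pipeline. A minimal sketch, assuming the same $COMMIT and ${url_host} variables as in job3 above:

  script:
    # ... same steps as above, up to the commit ...
    - git add -A
    - git commit -m "$COMMIT"
    # -o ci.skip is a GitLab push option: the push is accepted, but no new
    # pipeline is created for it, which breaks the update -> push -> pipeline cycle
    - git push -o ci.skip "${url_host}" HEAD:main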

GitLab: what is the use of before_script inside a job?

I don't see the need for using before_script in a job; the commands could just be put inside script itself:
deploy-to-stage:
  stage: deploy
  before_script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
  script:
    - *** some code here ***
If they are going to run one after another anyway, what is the point?
I can understand having a before_script common to all jobs, because that saves some boilerplate.
Effectively, machine-wise the before_script content and the script content are concatenated and executed together in a single shell, but the jobs aren't (only) read by machines.
Let's take your current job as an example, and suppose that I have to maintain it in the future.
If I have to change something about how the new code is generated or how an image is deployed, I can go straight to the script section, because the job is properly laid out and I don't need to touch anything related to the Git configuration. On the other hand, if everything is in that one section, I'll have to read through all of the code, when the part I'm interested in is probably at the end (though it might not be).
Of course, this only applies when the separation between before_script and script is properly set and not a random split without any consideration.
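As a purely illustrative sketch of that separation (the job name, image, and deploy target are made up), the setup noise lives in before_script and the part a maintainer usually edits lives in script, even though both run in the same shell:

deploy-to-stage:
  stage: deploy
  image: alpine
  before_script:
    # environment setup: rarely changes, easy to skim past
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
  script:
    # the actual deployment logic: this is the part that gets edited over time
    - rsync -av build/ deploy-target:/srv/app/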

Gitlab-ci always lists the same user as triggerer

I've got a project involving multiple GitLab users, all at ownership level. I've got a gitlab-ci.yml file that creates a new tag and pushes the new tag to the repository. This was set up using a deploy key and ssh. The problem is, no matter who actually triggers the job, the same user is always listed as the triggerer, which causes some traceability problems.
Currently, the .yml looks something like this, taken from this link:
before_script:
  - echo "$SSH_PRIVATE_KEY_TOOLKIT" | tr -d '\r' | ssh-add - > /dev/null
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan $GITLAB_URL >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - git config --global user.email $GITLAB_USER_EMAIL
  - git config --global user.name $GITLAB_USER_NAME
Where $SSH_PRIVATE_KEY_TOOLKIT is generated as suggested in the link.
For just creating a tag, an API call using the Tags API would be much easier. Since the job token should normally have enough permissions for this, the tag would always be attributed to the user who ran the job/pipeline. (Untested; does not work.)
https://docs.gitlab.com/ee/api/tags.html#create-a-new-tag
curl --request POST --header "JOB-TOKEN: $CI_JOB_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/tags?tag_name=<tag>&ref=$CI_COMMIT_SHA"
You can always fall back to creating releases with the Releases API, which also results in a Git tag. https://docs.gitlab.com/ee/api/releases/index.html#create-a-release
curl --request POST --header "JOB-TOKEN: $CI_JOB_TOKEN" \
     --header "Content-Type: application/json" \
     --data '{ "name": "New release", "ref": "'"$CI_COMMIT_SHA"'", "tag_name": "<TAG>", "description": "Super nice release" }' \
     "$CI_API_V4_URL/projects/$CI_PROJECT_ID/releases"
or using the GitLab CI release keyword: https://docs.gitlab.com/ee/ci/yaml/index.html#release
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $TAG                  # Run this job when a tag is created manually
  script:
    - echo 'running release_job'
  release:
    name: 'Release $TAG'
    description: 'Created using the release-cli $EXTRA_DESCRIPTION'  # $EXTRA_DESCRIPTION must be defined
    tag_name: '$TAG'                                                 # elsewhere in the pipeline.
    ref: '$TAG'

Is it possible to do a git push within a Gitlab-CI without SSH?

We want to know if it is technically possible, as in GitHub, to do a git push over HTTPS rather than SSH, and without directly using a username and password in the request.
I have seen people who seem to think it is possible, but we weren't able to prove it.
Can anyone confirm that there is a feature that allows you to push using a user access token or the gitlab-ci-token from within CI?
Here is my before_script.sh, which can be used from any .gitlab-ci.yml:
before_script:
  - ./before_script.sh
All you need is to set a protected environment variable called GL_TOKEN or GITLAB_TOKEN within your project.
if [[ -v "GL_TOKEN" || -v "GITLAB_TOKEN" ]]; then
  if [[ "${CI_PROJECT_URL}" =~ (([^/]*/){3}) ]]; then
    mkdir -p $HOME/.config/git
    echo "${BASH_REMATCH[1]/:\/\//://gitlab-ci-token:${GL_TOKEN:-$GITLAB_TOKEN}@}" > $HOME/.config/git/credentials
    git config --global credential.helper store
  fi
fi
It doesn't require changing the default Git strategy, and it works fine on non-protected branches using the default gitlab-ci-token.
On a protected branch, you can use the git push command as usual.
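As a usage sketch (before_script.sh is the script above; the tag name and committer identity are just illustrations), a job can then push back over HTTPS without any SSH setup:

create_tag:
  stage: deploy
  before_script:
    - ./before_script.sh
  script:
    - git config user.email "ci@example.com"   # hypothetical identity for the CI-created tag
    - git config user.name "CI"
    - git tag "build-${CI_PIPELINE_ID}"
    # pushing to the plain project URL lets git pick up the stored
    # gitlab-ci-token / GL_TOKEN credential written by before_script.sh
    - git push "${CI_PROJECT_URL}.git" "build-${CI_PIPELINE_ID}"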
We stopped using SSH keys; Vít Kotačka's answer helped us understand why it was failing before.
I was not able to push back via HTTPS from a Docker executor when I made changes in the repository that was cloned by gitlab-runner. Therefore, I use the following workaround:
Clone a repository to some temporary location via https with a user access token.
Do some Git work (like merging, or tagging).
Push changes back.
I have a job in the .gitlab-ci.yml:
tagMaster:
  stage: finalize
  script: ./tag_master.sh
  only:
    - master
  except:
    - tags
and then I have a shell script tag_master.sh with Git commands:
#!/usr/bin/env bash
OPC_VERSION=`gradle -q opcVersion`
CI_PIPELINE_ID=${CI_PIPELINE_ID:-00000}
mkdir /tmp/git-tag
cd /tmp/git-tag
git clone https://deployer-token:$DEPLOYER_TOKEN@my.company.com/my-user/my-repo.git
cd my-repo
git config user.email deployer@my.company.com
git config user.name 'Deployer'
git checkout master
git pull
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags
This works well.
Just FYI, personal access tokens have account level access, which is generally too broad. The Deploy Key is better as it only has project-level access and can be given write permissions at the time of creation. You can provide the public SSH key as the deploy key and the private key can come from a CI/CD variable.
Here's basically the job I use for tagging:
release_tagging:
  stage: release
  image: ubuntu
  before_script:
    - mkdir -p ~/.ssh
    # Settings > Repository > Deploy Keys > "DEPLOY_KEY_PUBLIC" is the public key of the utilized SSH pair
    # Settings > CI/CD > Variables > "DEPLOY_KEY_PRIVATE" is the private key of the utilized SSH pair, type is 'File' and ends with empty line
    - mv "$DEPLOY_KEY_PRIVATE" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - 'which ssh-agent || (apt-get update -y && apt-get install openssh-client git -y) > /dev/null 2>&1'
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa > /dev/null 2>&1
    - (ssh-keyscan -H $CI_SERVER_HOST >> ~/.ssh/known_hosts) > /dev/null 2>&1
  script:
    # .gitconfig
    - touch ~/.gitconfig
    - git config --global user.name $GITLAB_USER_NAME
    - git config --global user.email $GITLAB_USER_EMAIL
    # fresh clone
    - mkdir ~/source && cd $_
    - git clone git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
    - cd $CI_PROJECT_NAME
    # Version tag
    - git tag -a "v$(cat version)" -m "version $(cat version)"
    - git push --tags

How to safely login to private docker registry in gitlab?

I know there are secret variables and I tried passing the secret to a bash script.
When used on a bash script that has #!/bin/bash -x the password can be seen in clear text when using the docker login command like this:
docker login -u "$USERNAME" -p "$PASSWORD" $CONTAINERREGISTRY
Is there a way to safely login to a container registry in gitlab-ci?
You can use before_script at the beginning of the .gitlab-ci.yml file, or inside each job if you need several authentications:
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin
Where $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each script or at the beginning of the whole file:
after_script:
  - docker logout
I wrote an answer about using GitLab CI and Docker to build Docker images:
https://stackoverflow.com/a/50684269/8247069
GitLab provides an array of environment variables when running a job. You'll want to become familiar with them and use them while developing (running test builds and such), so that you won't need to do anything except set the CI/CD variables in GitLab accordingly (much like ENV); GitLab provides most of what you'd want. See GitLab Environment variables.
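For instance, a throwaway job like the sketch below (the job name is arbitrary) can be used to print a few of the predefined variables and see what the runner actually receives:

show_ci_vars:
  stage: build
  script:
    # all of these are predefined by GitLab for every job
    - echo "Project:  $CI_PROJECT_PATH"
    - echo "Branch:   $CI_COMMIT_REF_NAME"
    - echo "Commit:   $CI_COMMIT_SHA"
    - echo "Pipeline: $CI_PIPELINE_ID"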
Just a minor tweak on what has been suggested previously (combining the GitLab-suggested approach with this one).
For more information on where and how to use before_script and after_script, see the .gitlab-ci.yml configuration parameters. I tend to put my login command as one of the last lines in my main before_script (not inside the stages) and my logout in a final after_script.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin;
Then further down your .gitlab-ci.yml...
after_script:
  - docker logout;
For my local development, I create a .env file that follows a common convention, and the following bash snippet checks whether the file exists and imports its values into your shell. To keep my project secure AND friendly, .env is git-ignored, but I maintain a .env.sample with safe example values, and I DO include that.
if [ -f .env ]; then printf "\n\n::Sourcing .env\n" && set -o allexport; source .env; set +o allexport; fi
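As an illustration of that convention, a .env.sample might look like the sketch below (the variable names mirror the GitLab CI variables used in the example that follows; the values are placeholders):

# .env.sample -- committed to the repo; copy to .env (which is git-ignored) and fill in real values
CI_REGISTRY=registry.gitlab.com
CI_REGISTRY_USER=your-gitlab-username
CI_REGISTRY_PASSWORD=replace-me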
Here's a mostly complete example:
image: docker:19.03.9-dind

stages:
  - buildAndPublish

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2

services:
  - docker:19.03.9-dind

before_script:
  - printf "::GitLab ${CI_BUILD_STAGE} stage starting for ${CI_PROJECT_URL}\n";
  - printf "::JobUrl=${CI_JOB_URL}\n";
  - printf "::CommitRef=${CI_COMMIT_REF_NAME}\n";
  - printf "::CommitMessage=${CI_COMMIT_MESSAGE}\n\n";
  - printf "::PWD=${PWD}\n\n";
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin;

build-and-publish:
  stage: buildAndPublish
  script:
    - buildImage;
    - publishImage;
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'  # Run for master, but not otherwise
      when: always
    - when: never

after_script:
  - docker logout registry.gitlab.com;

Resources