GitLab CI always lists the same user as the triggerer

I've got a project involving multiple GitLab users, all at Owner level, and a gitlab-ci.yml file that creates a new tag and pushes it to the repository. This was set up using a deploy key and SSH. The problem is that no matter who actually triggers the job, the same user is always listed as the triggerer, which causes some traceability problems.
Currently, the .yml looks something like this, taken from this link:
before_script:
  - echo "$SSH_PRIVATE_KEY_TOOLKIT" | tr -d '\r' | ssh-add - > /dev/null
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan $GITLAB_URL >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - git config --global user.email $GITLAB_USER_EMAIL
  - git config --global user.name $GITLAB_USER_NAME
Where $SSH_PRIVATE_KEY_TOOLKIT is generated as suggested in the link.

For just creating a tag, an API call using the Tags API would be much easier. Since the job token should normally have enough permissions for this, the tag would always be attributed to the user who triggered the job/pipeline. (Untested; reportedly this does not work with the job token.)
https://docs.gitlab.com/ee/api/tags.html#create-a-new-tag
curl --request POST --header "JOB-TOKEN: $CI_JOB_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/tags?tag_name=<tag>&ref=$CI_COMMIT_SHA"
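For reference, wrapped in a job it could look like this (untested sketch, assuming a TAG variable is provided when the pipeline is triggered; if the job token is not accepted here, a project access token passed in a PRIVATE-TOKEN header from a CI/CD variable should work instead):
create_tag:
  stage: release
  image: curlimages/curl:latest
  rules:
    - if: $TAG
  script:
    # Create the tag via the Tags API; attribution follows the token used.
    - 'curl --fail --request POST --header "JOB-TOKEN: $CI_JOB_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/tags?tag_name=$TAG&ref=$CI_COMMIT_SHA"'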
You can always fall back to creating a release with the Releases API, which also results in a Git tag. https://docs.gitlab.com/ee/api/releases/index.html#create-a-release
curl --request POST \
  --header "JOB-TOKEN: $CI_JOB_TOKEN" \
  --header "Content-Type: application/json" \
  --data "{ \"name\": \"New release\", \"ref\": \"$CI_COMMIT_SHA\", \"tag_name\": \"<TAG>\", \"description\": \"Super nice release\" }" \
  "$CI_API_V4_URL/projects/$CI_PROJECT_ID/releases"
or using the GitLab CI release keyword https://docs.gitlab.com/ee/ci/yaml/index.html#release
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $TAG                                                       # Run this job when a tag is created manually
  script:
    - echo 'running release_job'
  release:
    name: 'Release $TAG'
    description: 'Created using the release-cli $EXTRA_DESCRIPTION'  # $EXTRA_DESCRIPTION must be defined
    tag_name: '$TAG'                                                 # elsewhere in the pipeline.
    ref: '$TAG'
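Since the release keyword creates the tag itself when it does not exist yet, a variant (untested sketch, names are illustrative) can tag every pipeline on the default branch without a pre-existing tag:
auto_release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - echo "creating release v1.0.$CI_PIPELINE_IID"
  release:
    tag_name: 'v1.0.$CI_PIPELINE_IID'
    ref: '$CI_COMMIT_SHA'
    description: 'Automatic release for pipeline $CI_PIPELINE_IID'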

Related

How to run ansible playbook from github actions - without using external action

I have written a workflow file, that prepares the runner to connect to the desired server with ssh, so that I can run an ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server.
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can only connect to the server if I execute the command ssh -t -v theUser@theHost first.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works fine from my local machine without using the sudo password. I configured the server so that the user has sufficient rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Finally found it
First basic setup of the action itself.
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job and check out the repository in its first step.
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next set up ssh correctly.
- name: Setup ssh
  shell: bash
  run: |
    service ssh status
    eval `ssh-agent -s`
First of all, make sure that the SSH service is running. In my case it was already running; however, when I experimented with Docker I had to start it manually in the first place with service ssh start. Next, make sure the .ssh folder exists for your user (the runner user in my case) and copy your private key into that folder. I added a GitHub secret to my repository in which I saved my private key.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key file is protected; if it is not, SSH will refuse to work with it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally, when you start an SSH connection you are asked whether you want to save the host permanently as a known host. Since we are running non-interactively, we can't type in yes, and if the prompt is not answered the process fails.
You have to prevent the process from being interrupted by the prompt. To do so, add the host to the known_hosts file yourself using ssh-keyscan. Note that ssh-keyscan can produce output for different key types.
Simply using ssh-keyscan was not enough in my case; I had to specify the key types explicitly. The generated output has to be written to the known_hosts file in the .ssh folder of your user, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
Now you are almost there. Just call ansible-playbook to run the Ansible script. I created a new step in which I change into the folder in my repository where my Ansible files live.
- name: Run ansible script
  shell: bash
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Next enjoy...
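Alternatively (untested sketch), Ansible can be told to skip host-key checking entirely via the ANSIBLE_HOST_KEY_CHECKING environment variable, which avoids the ssh-keyscan step at the cost of some security:
- name: Run ansible script
  shell: bash
  env:
    ANSIBLE_HOST_KEY_CHECKING: "false"
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml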
An alternative, without explaining why you get those errors, is to test and use an action such as dawidd6/action-ansible-playbook to run your playbook.
That way, you can check whether the "sudo Password is missing" error still occurs with that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose
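If the underlying problem really is the missing sudo password, you could also try passing a become password through the action's options (untested sketch; assumes a SUDO_PASSWORD secret exists in your repository):
    # Hypothetical: supply the become (sudo) password as an extra var
    options: |
      --extra-vars "ansible_become_password=${{ secrets.SUDO_PASSWORD }}"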

How to differentiate when the MR is accepted

I'd like to have a branch in the deployment script of my .gitlab-ci.yml file that is based on whether the particular pipeline is run because a MR has been accepted and is being merged into the default branch.
For instance,
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - if [ "${CI_DEFAULT_BRANCH}" = "${CI_COMMIT_BRANCH}" ]; then \
        rsync $BUILD_DIR/myfile.tar.gz filestore1 ; \
      else \
        rsync $BUILD_DIR/myfile.tar.gz filestore2 ; \
      fi
I don't think that the comparison of CI_DEFAULT* with CI_COMMIT* is a safe comparison. I believe this works anytime a commit is on the default branch, not when being merged, so I think I'm missing an important distinction and/or a best-practice.
My intent is that so long as the MR is still in dev, it should be pushed to filestore2, and only pushed to the production filestore1 when the MR is good, tests good/complete, and is accepted.
Related:
Gitlab: How to run a deploy job on a tagged version / release? requires me to tag when I need to release something; this may be something I work into my workflow, but is not our current workflow
The code you have makes sense to fulfill the purpose you described and should work as-is (assuming the bash code is otherwise well-formed).
When changes are merged to the default branch, CI_COMMIT_BRANCH will be the same as CI_DEFAULT_BRANCH and the condition you have would evaluate to true, causing it to deploy to filestore1 -- for pipelines running on any other branch, filestore2 would be used when this job runs.
Your code should be as follows; this way filestore1 is pushed to only when the MR is merged to main or master, otherwise filestore2 is used.
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - if [ "${CI_COMMIT_BRANCH}" = "main" ] || [ "${CI_COMMIT_BRANCH}" = "master" ]; then \
        rsync $BUILD_DIR/myfile.tar.gz filestore1 ; \
      else \
        rsync $BUILD_DIR/myfile.tar.gz filestore2 ; \
      fi
The if statement checks for master or main here.
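If you'd rather avoid shell branching entirely, another option (untested sketch) is to split the deployment into two jobs and let rules decide which one runs:
Deploy production:
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    # (the ssh-agent / key setup from the original script would still go here)
    - apk add openssh-client rsync
    - rsync $BUILD_DIR/myfile.tar.gz filestore1

Deploy development:
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
  script:
    - apk add openssh-client rsync
    - rsync $BUILD_DIR/myfile.tar.gz filestore2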

Push Notifications to Rocket.Chat in Gitlab CI

I am trying to setup notifications to my Rocket.Chat server through my .gitlab-ci.yml file. I have my test and deploy stages working, but my notify stage is erroring out. I followed the instructions from here, but I adjusted the notify scripts to work with Rocket.Chat instead of Pushbullet.
Here is my .gitlab-ci.yml:
stages:
  - test
  - deploy
  - notify

test:
  stage: test
  image: homeassistant/amd64-homeassistant
  script:
    - hass --script check_config -c .

deploy:
  stage: deploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apk update && apk add openssh-client )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh $DEPLOY_USER@$DEPLOY_HOST "cd '$DEPLOY_PATH'; git pull; sudo systemctl restart home-assistant@homeassistant"

notify_success:
  stage: notify
  allow_failure: true
  only:
    - master
  script:
    - curl -X POST -H 'Content-Type: application/json' --data '{"text":"New Hass config deployed successfully!"}' https://chat.bryantgeeks.com/hooks/$ROCKET_CHAT_TOKEN

notify_fail:
  stage: notify
  allow_failure: true
  only:
    - master
  when: on_failure
  script:
    - curl -X POST -H 'Content-Type: application/json' --data '{"text":"New Hass config failed. Please check for errors!"}' https://chat.bryantgeeks.com/hooks/$ROCKET_CHAT_TOKEN
I get this error in the CI Lint:
Status: syntax is incorrect
Error: jobs:notify_success:script config should be a string or an array of strings
If I change the notify script lines to have single quotes around it ('), I get the following error in CI Lint:
Status: syntax is incorrect
Error: (): did not find expected key while parsing a block mapping at line 33 column 7
If I try double quotes around the script line ("), I get the following error:
Status: syntax is incorrect
Error: (): did not find expected '-' indicator while parsing a block collection at line 33 column 5
I'm not sure what else to try or where to look at this point for how to correct this. Any help is appreciated.
YAML really doesn't like colons followed by a space inside unquoted strings. The culprit is the colon in 'Content-Type: application/json'.
Sometimes using the multiline string format helps, like this:
notify_success:
  stage: notify
  allow_failure: true
  only:
    - master
  script: |
    curl -X POST -H 'Content-Type: application/json' --data '{"text":"New Hass config deployed successfully!"}' https://chat.bryantgeeks.com/hooks/$ROCKET_CHAT_TOKEN
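Another workaround (untested sketch) is to keep the list form but wrap the whole command in a double-quoted YAML scalar and escape the inner double quotes:
notify_success:
  stage: notify
  allow_failure: true
  only:
    - master
  script:
    - "curl -X POST -H 'Content-Type: application/json' --data '{\"text\":\"New Hass config deployed successfully!\"}' https://chat.bryantgeeks.com/hooks/$ROCKET_CHAT_TOKEN"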

Is it possible to do a git push within a Gitlab-CI without SSH?

We want to know whether it's technically possible, as in GitHub, to do a git push using the HTTPS protocol rather than SSH, and without directly using a username and password in the request.
I have seen people who seem to think it is possible, but we weren't able to prove it.
Is there any proof out there confirming a feature that allows you to push using a user access token or the gitlab-ci-token from within CI?
Here is my before_script.sh, which can be used within any .gitlab-ci.yml:
before_script:
  - ./before_script.sh
All you need is to set a protected environment variable called GL_TOKEN or GITLAB_TOKEN within your project.
if [[ -v "GL_TOKEN" || -v "GITLAB_TOKEN" ]]; then
if [[ "${CI_PROJECT_URL}" =~ (([^/]*/){3}) ]]; then
mkdir -p $HOME/.config/git
echo "${BASH_REMATCH[1]/:\/\//://gitlab-ci-token:${GL_TOKEN:-$GITLAB_TOKEN}#}" > $HOME/.config/git/credentials
git config --global credential.helper store
fi
fi
It doesn't require changing the default Git strategy, and it works fine on non-protected branches using the default gitlab-ci-token.
On a protected branch, you can use the git push command as usual.
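For illustration, a minimal job using that before_script.sh could look like this (untested sketch; the tag name is just an example, and the remote set-url line makes sure the stored credentials are consulted instead of the job token embedded in the checkout URL):
tag_release:
  stage: release
  before_script:
    - ./before_script.sh
    # point origin at a credential-free HTTPS URL so the stored token is used
    - git remote set-url origin "${CI_PROJECT_URL}.git"
  script:
    - git tag "v1.0.${CI_PIPELINE_IID}"
    - git push origin "v1.0.${CI_PIPELINE_IID}"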
We stopped using SSH keys; Vít Kotačka's answer helped us understand why it was failing before.
I was not able to push back via HTTPS from a Docker executor when I made changes in the repository cloned by gitlab-runner. Therefore, I use the following workaround:
Clone a repository to some temporary location via https with a user access token.
Do some Git work (like merging, or tagging).
Push changes back.
I have a job in the .gitlab-ci.yml:
tagMaster:
  stage: finalize
  script: ./tag_master.sh
  only:
    - master
  except:
    - tags
and then I have a shell script tag_master.sh with Git commands:
#!/usr/bin/env bash
OPC_VERSION=`gradle -q opcVersion`
CI_PIPELINE_ID=${CI_PIPELINE_ID:-00000}
mkdir /tmp/git-tag
cd /tmp/git-tag
git clone https://deployer-token:$DEPLOYER_TOKEN@my.company.com/my-user/my-repo.git
cd my-repo
git config user.email deployer@my.company.com
git config user.name 'Deployer'
git checkout master
git pull
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags
This works well.
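A possible simplification (untested sketch, assuming the same deployer token has write access) is to skip the fresh clone and point the existing checkout at an authenticated push URL:
# hypothetical alternative inside the CI job's own checkout
git remote set-url --push origin "https://deployer-token:${DEPLOYER_TOKEN}@my.company.com/my-user/my-repo.git"
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags origin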
Just FYI: personal access tokens have account-level access, which is generally too broad. A deploy key is better, as it only has project-level access and can be given write permission at the time of creation. You provide the public SSH key as the deploy key, and the private key can come from a CI/CD variable.
Here's basically the job I use for tagging:
release_tagging:
  stage: release
  image: ubuntu
  before_script:
    - mkdir -p ~/.ssh
    # Settings > Repository > Deploy Keys > "DEPLOY_KEY_PUBLIC" is the public key of the utilized SSH pair
    # Settings > CI/CD > Variables > "DEPLOY_KEY_PRIVATE" is the private key of the utilized SSH pair, type is 'File' and ends with an empty line
    - mv "$DEPLOY_KEY_PRIVATE" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - 'which ssh-agent || (apt-get update -y && apt-get install openssh-client git -y) > /dev/null 2>&1'
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa > /dev/null 2>&1
    - (ssh-keyscan -H $CI_SERVER_HOST >> ~/.ssh/known_hosts) > /dev/null 2>&1
  script:
    # .gitconfig
    - touch ~/.gitconfig
    - git config --global user.name $GITLAB_USER_NAME
    - git config --global user.email $GITLAB_USER_EMAIL
    # fresh clone
    - mkdir ~/source && cd $_
    - git clone git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
    - cd $CI_PROJECT_NAME
    # Version tag
    - git tag -a "v$(cat version)" -m "version $(cat version)"
    - git push --tags

How to safely login to private docker registry in gitlab?

I know there are secret variables and I tried passing the secret to a bash script.
When used in a bash script that has #!/bin/bash -x, the password can be seen in clear text when using the docker login command like this:
docker login "$USERNAME" "$PASSWORD" $CONTAINERREGISTRY
Is there a way to safely login to a container registry in gitlab-ci?
You can use before_script at the beginning of the gitlab-ci.yml file, or inside each job if you need different authentications:
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin
Where $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each job's script, or once at the top level of the file:
after_script:
  - docker logout
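Putting it together, a minimal build-and-push job against the GitLab registry could look like this (untested sketch; image versions and tags are just examples):
build:
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  after_script:
    - docker logout "$CI_REGISTRY"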
I wrote an answer about using GitLab CI and Docker to build Docker images: https://stackoverflow.com/a/50684269/8247069
GitLab provides an array of environment variables when running a job. You'll want to become familiar with them and use them while developing (running test builds and such), so that you won't need to do anything except set the CI/CD variables in GitLab accordingly (they behave like environment variables); GitLab will provide most of what you'd want. See GitLab's predefined environment variables.
Just a minor tweak on what has been suggested previously (combining the GitLab-suggested approach with this one).
For more information on where and how to use before_script and after_script, see the .gitlab-ci.yml configuration parameters. I tend to put my login command as one of the last entries in my top-level before_script (not inside the individual jobs) and my logout in a final after_script.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin;
Then further down your .gitlab-ci.yml...
after_script:
  - docker logout;
For my local development, I create a .env file that follows a common convention; the following bash snippet checks whether the file exists and imports the values into your shell. To keep my project secure AND friendly, .env is git-ignored, but I maintain a .env.sample with safe example values, and I DO include that.
if [ -f .env ]; then printf "\n\n::Sourcing .env\n" && set -o allexport; source .env; set +o allexport; fi
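For example, a .env.sample could look like this (placeholder values):
# .env.sample -- copy to .env and fill in real values; .env itself stays git-ignored
CI_REGISTRY=registry.gitlab.com
CI_REGISTRY_USER=your-username
CI_REGISTRY_PASSWORD=your-access-token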
Here's a mostly complete example:
image: docker:19.03.9-dind

stages:
  - buildAndPublish

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2

services:
  - docker:19.03.9-dind

before_script:
  - printf "::GitLab ${CI_BUILD_STAGE} stage starting for ${CI_PROJECT_URL}\n";
  - printf "::JobUrl=${CI_JOB_URL}\n";
  - printf "::CommitRef=${CI_COMMIT_REF_NAME}\n";
  - printf "::CommitMessage=${CI_COMMIT_MESSAGE}\n\n";
  - printf "::PWD=${PWD}\n\n";
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin;

build-and-publish:
  stage: buildAndPublish
  script:
    # buildImage and publishImage are helper functions/scripts defined elsewhere in this project
    - buildImage;
    - publishImage;
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'  # Run for master, but not otherwise
      when: always
    - when: never
  after_script:
    - docker logout registry.gitlab.com;
