How to dump GitLab CI environment variables to a file - GitLab

The question
How can I dump all GitLab CI environment variables (including variables set in the project or group CI/CD settings) to a file, but only those, without the environment variables of the host on which the GitLab Runner executes?
Background
We are using GitLab CI/CD to deploy our projects to a Docker server. Each project contains a docker-compose.yml file which uses various environment variables, e.g. DB passwords. We are using a .env file to store these variables, so one can start/restart the containers after deployment from the command line, without accessing GitLab.
Our deployment script looks something like this:
deploy:
  script:
    # ...
    - cp docker-compose.development.yml ${DEPLOY_TO_PATH}/docker-compose.yml
    - env > variables.env
    - docker-compose up -d
    # ...
And the docker-compose.yml file looks like this:
version: "3"
services:
project:
image: some/image
env_file:
- variables.env
...
The problem is that the .env file now contains both the GitLab variables and the host's system environment variables, and as a result the PATH variable inside the container is overwritten.
I have developed a workaround with grep:
env | grep -Pv "^PATH" > variables.env
This has kept things working for now, but I think the problem might hit us again with other variables that should have different values inside a container and on the host system.
I know I can list all the variables explicitly in docker-compose and similar files, but we already have quite a few of them across a few projects, so that is not a solution.
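For example, the same workaround could be extended to exclude several well-known host variables at once; a sketch only, since the exact list depends on what the runner host exports:
env | grep -Ev "^(PATH|HOME|HOSTNAME|PWD|SHLVL|TERM)=" > variables.env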

You need to add the following command to your script:
script:
  ...
  # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
  - echo "$KUBE_CA_PEM" > variables.env
  ...

This might be late, but I did something like this:
script:
  - env | grep -v "CI" | grep -v "FF" | grep -v "GITLAB" | grep -v "PWD" | grep -v "PATH" | grep -v "HOME" | grep -v "HOST" | grep -v "SH" > application.properties
  - cat application.properties
It's not the best, but it works.
The one problem with this is that you can have variables whose value contains one of the exclusions, i.e. "CI", "FF", "GITLAB", "PWD", "PATH", "HOME", "HOST", "SH".
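One way to reduce that risk is to anchor the exclusions to the beginning of the variable name in a single extended regex, so values that merely contain "CI" or "PATH" are no longer filtered out; a sketch, not tested on every shell a runner might use:
script:
  # only match variable names starting with the listed prefixes, never the values
  - env | grep -Ev "^(CI|FF|GITLAB|PWD|PATH|HOME|HOST|SH)[A-Za-z0-9_]*=" > application.properties
  - cat application.properties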

My reusable solution, /tools/gitlab/script-gitlab-variables.yml:
variables:
  # Default values
  GITLAB_EXPORT_ENV_FILENAME: '.env.gitlab.cicd'

.script-gitlab-variables:
  debug:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_debug[collapsed=true]\r\e[0K[GITLAB VARIABLES DEBUG]"
    # command
    - env
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_debug\r\e[0K"
  export-to-env:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_export_to_env[collapsed=true]\r\e[0K[GITLAB VARIABLES EXPORT]"
    # verify mandatory variables
    - test ! -z "$GITLAB_EXPORT_VARS" && echo "$GITLAB_EXPORT_VARS" || exit $?
    # display variables
    - echo "$GITLAB_EXPORT_ENV_FILENAME"
    # command
    - env | grep -E "^($GITLAB_EXPORT_VARS)=" > $GITLAB_EXPORT_ENV_FILENAME
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_export_to_env\r\e[0K"
  cat-env:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_cat-env[collapsed=true]\r\e[0K[GITLAB VARIABLES CAT ENV]"
    # command
    - cat $GITLAB_EXPORT_ENV_FILENAME
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_cat-env\r\e[0K"
How to use it in .gitlab-ci.yml:
include:
  - local: '/tools/gitlab/script-gitlab-variables.yml'

Your Job:
  variables:
    GITLAB_EXPORT_VARS: 'CI_BUILD_NAME|GITLAB_USER_NAME'
  script:
    - !reference [.script-gitlab-variables, debug]
    - !reference [.script-gitlab-variables, export-to-env]
    - !reference [.script-gitlab-variables, cat-env]
Result of cat .env.gitlab.cicd:
CI_BUILD_NAME=Demo
GITLAB_USER_NAME=Benjamin
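Since export-to-env filters with grep -E, the whitelist can also hold patterns rather than only exact names; for example (a sketch, the MY_APP_ prefix is made up):
Your Job:
  variables:
    GITLAB_EXPORT_VARS: 'CI_COMMIT_[A-Z_]*|MY_APP_[A-Z_]*'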
If you need to dump everything:
# /tools/gitlab/script-gitlab-variables.yml
  dump-all:
    - env > $GITLAB_EXPORT_ENV_FILENAME

# .gitlab-ci.yml
script:
  - !reference [.script-gitlab-variables, dump-all]
I hope this helps.

Related

GitHub Actions to SSH into a Linux machine created by Terraform and run remote commands on it

I am creating an Azure Linux VM using Terraform through GitHub Actions. Once the VM is created, I use the outputs.tf file to get the keys, FQDN, IP address and user name, storing them in environment variables. Then I try to use these variables to SSH into the server in order to run remote commands on it. Here is my code:
name: 'Terraform'
on:
  push:
    branches:
      - "development"
    paths:
      - 'Infrastructure/**'
  pull_request:
permissions:
  contents: read
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_ACCESS_KEY: ${{ secrets.ARM_ACCESS_KEY }}
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3
        with:
          repository: 'myrepo/ModernDelivery'
          ref: 'development'
      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Create Infrastructure
        working-directory: ./Infrastructure
        run: |
          terraform init
          terraform validate
          terraform plan -out "infra.tfplan"
          terraform apply "infra.tfplan"
          echo "SSH_USER=$(terraform output -raw linuxsrvusername | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_KEY=$(terraform output -raw tls_public_key | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_HOST=$(terraform output -raw linuxsrvpublicip | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_FQDN=$(terraform output -raw linuxsrvfqdn | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo $SSH_USER
          echo $SSH_KEY
          echo $SSH_HOST
          echo $SSH_FQDN
      - name: Configure SSH and login
        shell: bash
        env:
          SSH_USER: ${{ env.SSH_USER }}
          SSH_KEY: ${{ env.SSH_KEY }}
          SSH_HOST: ${{ env.SSH_HOST }}
          SSH_FQDN: ${{ env.SSH_FQDN }}
        run: |
          sudo -i
          cd /home/runner
          sudo hostname $SSH_HOST
          mkdir -p /home/runner/ssh
          mv ssh .ssh
          echo "$SSH_KEY" > /home/runner/.ssh/authorized_keys
          chmod 0600 /home/runner/.ssh/authorized_keys
          cat >>/home/runner/.ssh/config <<END
          Host chefssh
            HostName $SSH_HOST
            User $SSH_USER
            IdentityFile /home/runner/.ssh/authorized_keys
            PubKeyAuthentication yes
            StrictHostKeyChecking no
          END
          ssh chefssh -t sudo -- "sh -c 'sudo apt-get update && sudo apt-get upgrade -y'"
I am getting the below error when the GitHub Actions workflow runs:
Run sudo -i
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '111.222.333.444' (ECDSA) to the list of known hosts.
Load key "/home/runner/.ssh/authorized_keys": invalid format
pha_xDuW3lc#111.222.333.444: Permission denied (publickey).
Error: Process completed with exit code 255.
This seems to tell me that the key passed in authorized_keys is not valid, which brings me to the question: which key is required? With Terraform I have 4 keys which can be generated:
private_key_openssh - the private key data in OpenSSH PEM format
private_key_pem - the private key data in PEM (RFC 1421) format
public_key_openssh - the public key data in "Authorized Keys" format
public_key_pem - the public key data in PEM (RFC 1421) format
Which among the 4 needs to be in authorized_keys? Also, do any other keys need to be added under the .ssh folder?
An ssh chefssh -t would use the /home/current-user/.ssh/config file, which differs from what sudo cat >> ~/.ssh/config writes to.
If your sudo commands are executed as the user root, the config file modified would be root's, i.e. /root/.ssh/config.
That would explain your error message: the right config file is not read, so the entry Host chefssh is not found.
Instead of using ~, try to use the full path, or at least echo $HOME to make sure you are using the expected path.
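Following that advice, a minimal sketch with the full runner paths spelled out might look like this; SSH_PRIVATE_KEY is a hypothetical variable holding Terraform's private_key_openssh output, since an IdentityFile must point at a private key (the public_key_openssh value is what belongs in authorized_keys on the target VM, not on the runner):
mkdir -p /home/runner/.ssh
echo "$SSH_PRIVATE_KEY" > /home/runner/.ssh/id_rsa   # hypothetical: the private key output, not a public key
chmod 0600 /home/runner/.ssh/id_rsa
cat >> /home/runner/.ssh/config <<END
Host chefssh
  HostName $SSH_HOST
  User $SSH_USER
  IdentityFile /home/runner/.ssh/id_rsa
  StrictHostKeyChecking no
END
ssh chefssh -t "sh -c 'sudo apt-get update && sudo apt-get upgrade -y'"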

In GitLab, where is the Dockerfile located when it is configured to be generated dynamically?

I have an enhancement to use Kaniko for Docker builds on GitLab, but the pipeline fails to locate the dynamically generated Dockerfile, with this error:
$ echo "Docker build"
Docker build
$ cd ./src
$ pwd
/builds/group/subgroup/labs/src
$ cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
cp: can't stat '/builds/group/subgroup/labs/src/Dockerfile': No such file or directory
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
For context, the pipeline was designed to generate a Dockerfile dynamically for any particular project:
ci-scripts.yml
.create_dockerfile:
  script: |
    echo "checking dockerfile existence"
    if ! [ -e Dockerfile ]; then
      echo "dockerfile doesn't exist. Trying to create a new dockerfile from csproj."
      docker_entrypoint=$(grep -m 1 AssemblyName ./src/*.csproj | sed -r 's/\s*<[^>]*>//g' | sed -r 's/\r$//g').dll
      cat > Dockerfile << EOF
    FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
    WORKDIR /app
    COPY ./publish .
    ENTRYPOINT dotnet $docker_entrypoint
    EOF
      echo "dockerfile created"
    else
      echo "dockerfile exists"
    fi
And in the main pipeline, all that was needed was to reference ci-scripts.yml as appropriate and do a docker push.
After switching to Kaniko for Docker builds, Kaniko itself expects a Dockerfile at the location ${CI_PROJECT_DIR}/Dockerfile. In my context this is the path /builds/group/subgroup/labs.
And the main pipeline looks like this:
build-push.yml
docker_build_dev:
  tags:
    - aaa
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
    - pwd
    - cd ./src
    - pwd
  extends: .create_dockerfile
  variables:
    DEV_TAG: dev-latest
  script:
    - cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${DEV_TAG}"
In the before_script above, I keep the dynamically generated Dockerfile at the same path (./src) by switching from the default Docker build directory (/builds/group/subgroup/labs) to /builds/group/subgroup/labs/src. The assumption is that even with dynamic generation, the Dockerfile should still be maintained at ./src.
Expected
The dynamically generated Dockerfile should be available at the default Docker build path /builds/group/subgroup/labs after the script from ci-scripts.yml finishes executing.
When I maintain a Dockerfile in the repository (in ./src, without Kaniko) the Docker build runs successfully, but once I switch to dynamically generating the Dockerfile (with Kaniko) the pipeline cannot find the Dockerfile. When the Dockerfile is maintained in the repository this way, as opposed to being generated dynamically, I have to copy the file to the Kaniko load path via:
script:
  - cp ./src/Dockerfile /builds/group/subgroup/labs/Dockerfile
  - mkdir -p /kaniko/.docker
I am drawing a blank on how ci-scripts.yml works (it was done by someone who is no longer around). I have tried to pwd in the script itself to check which directory it is executing from:
.create_dockerfile:
  script: |
    pwd
    echo "checking dockerfile existence"
    ....
    ....
but I get an error:
$ - pwd # collapsed multi-line command
/scripts-1175-34808/step_script: eval: line 123: -: not found
My questions:
Where exactly does GitLab store Dockerfiles that are generated on the fly?
Is the generated Dockerfile treated as an artifact, and if so, at which path will it be?
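One way to check where the file actually ends up, rather than where it is expected to be, is to log its absolute path right after the heredoc writes it; a sketch of two extra lines to place after the existing echo "dockerfile created" inside .create_dockerfile:
      echo "dockerfile created at: $(pwd)/Dockerfile"
      ls -l "$(pwd)/Dockerfile"
Since cat > Dockerfile writes to the shell's current working directory, the file lands wherever the job has cd'd to at that point rather than in any special GitLab location.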

GitLab CI: Passing dynamic variables

I am looking to pass variable values dynamically, as shown below, to the terraform image mentioned in the link:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - /usr/bin/env
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
    - 'ACCESS_KEY_ID=${ENV}_AWS_ACCESS_KEY_ID'
    - 'SECRET_ACCESS_KEY=${ENV}_AWS_SECRET_ACCESS_KEY'
    - 'DEFAULT_REGION=${ENV}_AWS_DEFAULT_REGION'
    - 'export AWS_ACCESS_KEY_ID=${!ACCESS_KEY_ID}'
    - 'export AWS_SECRET_ACCESS_KEY=${!SECRET_ACCESS_KEY}'
    - 'export AWS_DEFAULT_REGION=${!DEFAULT_REGION}'
However, I am getting empty values. How can I pass dynamic values to the variables?
The confusion arises from the subtle fact that the GitLab Runner executes the commands passed into the script section using sh rather than bash.
The core issue is that the following syntax
'export AWS_ACCESS_KEY_ID=${!ACCESS_KEY_ID}'
is understood correctly only by bash, not by sh.
Therefore, we need to work around it by using syntax that sh understands.
For your case, something like the following should do it:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - /usr/bin/env
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
job:
  before_script:
    - ACCESS_KEY_ID=${ENV}_AWS_ACCESS_KEY_ID
    - export AWS_ACCESS_KEY_ID=$( eval echo \$$ACCESS_KEY_ID )
    - SECRET_ACCESS_KEY=${ENV}_AWS_SECRET_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$( eval echo \$$SECRET_ACCESS_KEY )
    - DEFAULT_REGION=${ENV}_AWS_DEFAULT_REGION
    - export AWS_DEFAULT_REGION=$( eval echo \$$DEFAULT_REGION )
  script:
    - echo $AWS_ACCESS_KEY_ID
    - echo $AWS_SECRET_ACCESS_KEY
    - echo $AWS_DEFAULT_REGION

GitLab CI: Shell script file cannot get environment variables

I have a shell script file deploy.sh:
#!/bin/sh
echo "CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA" >> .env
exit
I create a .gitlab-ci.yml file with this script:
...
script:
  - ...
  - ssh -T -i "xxx.pem" -o "StrictHostKeyChecking=no" $EC2_ADDRESS 'bash -s' < deploy.sh
I connect to EC2 and check the .env file; the result is:
CI_COMMIT_SHORT_SHA=
The deploy.sh file cannot get the value of the variable CI_COMMIT_SHORT_SHA.
I want the result to be:
CI_COMMIT_SHORT_SHA=xxxx
How can I do that? Please help me!
The script appears to be executed on another server using ssh; because of that, the CI environment variables are not present there.
You may pass the env variables directly using the ssh command:
ssh <machine> CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA <command>
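Applied to the job above, that could look like this (a sketch reusing the same key file and $EC2_ADDRESS from the question; the value is expanded on the runner before the command is sent to the remote shell):
script:
  - ssh -T -i "xxx.pem" -o "StrictHostKeyChecking=no" $EC2_ADDRESS "CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA bash -s" < deploy.sh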

How to safely log in to a private Docker registry in GitLab?

I know there are secret variables and I tried passing the secret to a bash script.
When used in a bash script that has #!/bin/bash -x, the password can be seen in clear text when using the docker login command like this:
docker login "$USERNAME" "$PASSWORD" $CONTAINERREGISTRY
Is there a way to safely log in to a container registry in gitlab-ci?
You can use before_script at the beginning of the .gitlab-ci.yml file, or inside each job if you need several authentications:
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin
Where $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each script, or once globally for the whole file:
after_script:
  - docker logout
I wrote an answer about using GitLab CI and Docker to build Docker images:
https://stackoverflow.com/a/50684269/8247069
GitLab provides an array of environment variables when running a job. You'll want to become familiar with them and use them while developing (running test builds and such), so that you won't need to do anything except set the CI/CD variables in GitLab accordingly (like ENV); GitLab will provide most of what you'd want. See GitLab environment variables.
Just a minor tweak on what has been suggested previously (combining the GitLab-suggested approach with this one).
For more information on where and how to use before_script and after_script, see the .gitlab-ci.yml configuration parameters. I tend to put my login command as one of the last in my main before_script (not in the stages) and my logout in a final after_script.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin;
Then further down your .gitlab-ci.yml...
after_script:
  - docker logout;
For my local development, I create a .env file that follows a common convention; the following bash snippet then checks whether the file exists and imports its values into your shell. To keep my project secure AND friendly, .env is git-ignored, but I maintain a .env.sample with safe example values and I DO include that.
if [ -f .env ]; then printf "\n\n::Sourcing .env\n" && set -o allexport; source .env; set +o allexport; fi
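For instance, the committed .env.sample might look like this (a sketch with placeholder values; the names mirror the GitLab-provided variables used below):
CI_REGISTRY=registry.gitlab.com
CI_REGISTRY_USER=example-user
CI_REGISTRY_PASSWORD=changeme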
Here's a mostly complete example:
image: docker:19.03.9-dind

stages:
  - buildAndPublish

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2

services:
  - docker:19.03.9-dind

before_script:
  - printf "::GitLab ${CI_BUILD_STAGE} stage starting for ${CI_PROJECT_URL}\n";
  - printf "::JobUrl=${CI_JOB_URL}\n";
  - printf "::CommitRef=${CI_COMMIT_REF_NAME}\n";
  - printf "::CommitMessage=${CI_COMMIT_MESSAGE}\n\n";
  - printf "::PWD=${PWD}\n\n";
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin;

build-and-publish:
  stage: buildAndPublish
  script:
    - buildImage;
    - publishImage;
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"' # Run for master, but not otherwise
      when: always
    - when: never
  after_script:
    - docker logout registry.gitlab.com;
