Create a GitLab pipeline that personalizes emails using a Jinja2 template and a YAML file containing recipient information. The pipeline should take two source files, one for the template and one for the recipient data (a YAML variables file with an array of recipients), process them with the jinja2docker package, and output a single file with the personalized email for each recipient. The processed files should be pushed to Git and the output file should be saved as an artifact.
My problem is:
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/caisin/Emails/.git/
Checking out d384984e as master...
**Removing jinja2docker/**
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:5288c9cee5f00fb66c0bb5301e594218c3670ecccd8cbf64d82248fa1e79264c for gitlab.fit.cvut.cz:5000/ict/images/alpine/ci:latest with digest gitlab.fit.cvut.cz:5000/ict/images/alpine/ci#sha256:98530a57e266169dc6bcfc716c340bdd39705fe2ca48b5fbf1c600c88ad4657b ...
**/bin/sh: eval: line 141: jinja2: not found**
$ if [ -n "$SSH_PRIVATE_KEY" ] && [ "${CI_BUILD_STAGE#deploy}" != "$CI_BUILD_STAGE" ]; then # collapsed multi-line command
$ echo "Processing template and variables using Jinja2..."
Processing template and variables using Jinja2...
$ jinja2 $TEMPLATE_FILE $VARIABLES_FILE -o $OUTPUT_FILE
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 127
variables:
  TEMPLATE_FILE: email_template.html
  VARIABLES_FILE: recipient_data.yml
  OUTPUT_FILE: output.txt

prepare:
  stage: prepare
  script:
    - echo "Preparing files for processing..."
    - git clone https://github.com/dinuta/jinja2docker.git
    - echo "Jinja2 repository cloned and ready for use."

process:
  stage: process
  script:
    - echo "Processing template and variables using Jinja2..."
    - jinja2 $TEMPLATE_FILE $VARIABLES_FILE -o $OUTPUT_FILE
    - echo "Processing complete!"
Related
Given the following very simple .gitlab-ci.yml pipeline:
---
variables:
  KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo...

stages:
  - test

build:
  stage: test
  script:
    - echo "$KEYCLOAK_VERSION"
As you might see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section.
Now, the Git repository contains a env.properties file with KEYCLOAK_VERSION=20.0.1 as content. How would I read the variable from that file and use it in the GitLab pipeline?
The documentation mentions import but this seems to be using YAML files.
To read variables from a file you can use the source or . command.
script:
  - source env.properties
  - echo $KEYCLOAK_VERSION
Attention:
One reason you might not want to do it this way is that whatever is in env.properties will be executed in your shell (for example rm -rf /), which could be very dangerous.
Maybe you can take a look here for some other solutions.
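If you only need specific keys, a safer sketch (assuming env.properties contains plain KEY=value lines) is to extract the value instead of executing the whole file:

script:
  # Read only the KEYCLOAK_VERSION line instead of sourcing (executing) the file
  - KEYCLOAK_VERSION=$(grep -E '^KEYCLOAK_VERSION=' env.properties | cut -d= -f2-)
  - echo "$KEYCLOAK_VERSION"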
I have a post-checkout hook that I'm trying to convert to be usable with pre-commit.
#!/bin/bash
# 0 means 'git checkout somefile' (don't do anything)
# 1 means 'git checkout branchname'
echo "> $*"
(($3)) || exit 0
declare -a blocked
blocked+=('master' 'main' 'examples')
printf -v blocked_rx '%s|' "${blocked[@]}"
blocked_rx="${blocked_rx%?}"
# shellcheck disable=SC2034
read -r prev cur < <(git reflog | awk 'NR==1{ print $6 " " $8; exit }')
[[ $cur =~ $blocked_rx ]] \
&& echo "WARNING: You cannot push $cur branch to remote!"
exit 0
I've created a .pre-commit-hooks.yaml file.
- id: warn-branch-on-checkout
  name: Message to stderr if branch
  language: script
  pass_filenames: false
  always_run: true
  stages: [post-checkout]
  entry: pre-commit-hooks/warn-branch-on-checkout
And my .pre-commit-config.yaml file looks like:
default_install_hook_types:
  - pre-commit
  - post-checkout
repos:
  - repo: https://MyCompany@dev.azure.com/MyCompany/MyProject/_git/myrepo
    rev: v0.1.12
    hooks:
      - id: warn-branch-on-checkout
        args: ['examples']
The bash script lives in pre-commit-hooks off the top level of the repository.
As far as I can tell, pre-commit is not calling warn-branch-on-checkout (I added the echo "> $*" in the script).
pre-commit.log in the cache dir is not being created.
What am I doing wrong?
Added examples of run:
$ git checkout examples
Switched to branch 'examples'
Your branch is up to date with 'origin/examples'.
HERE: /home/harleypig/projects/guardrail/.git/hooks
1: /usr/bin/python3 -mpre_commit hook-impl --config=.pre-commit-config.yaml --hook-type=post-checkout --hook-dir /home/harleypig/projects/guardrail/.git/hooks -- 79d1096b98caa40e672a502855cb139d72de2ada 79d1096b98caa40e672a502855cb139d72de2ada 1
Message to stderr if branch..............................................Passed
I added a couple of echo statements to the pre-commit generated hook (the HERE: and 1: lines above).
I don't see > blah blah blah so the script isn't being called at all.
thanks for adding the output
pre-commit hides the output by default unless there is a failure -- this is to keep your output clean and noise free (noisy outputs tend to get ignored)
you can change this to always display output by setting the verbose: true option (or by exiting nonzero -- which doesn't affect post-checkout since it is too late to affect "success"). note that verbose: true is intended mainly as a debugging mechanism so generally adding noise to the output is discouraged
disclaimer: I created pre-commit
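For reference, a minimal sketch of what that looks like in the .pre-commit-config.yaml from the question (verbose is the only addition; the rest is copied from above):

repos:
  - repo: https://MyCompany@dev.azure.com/MyCompany/MyProject/_git/myrepo
    rev: v0.1.12
    hooks:
      - id: warn-branch-on-checkout
        args: ['examples']
        verbose: true  # always show the hook's output, not only on failure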
I have a simple pipeline with one job to test bash scripts. The pipeline is as follows:
image: alpine/git

stages:
  - test_branching

test_branch:
  stage: test_branching
  before_script:
    - mkdir -p .common
    - wget https://x.x.x.x/branching.sh > .common/test.sh && chmod +x .common/test.sh
    - source .common/test.sh
  script:
    - test_pipe
    - echo "app version is ${app_version}"
The bash script is as follows:
#!/bin/sh
function test_pipe () {
  app_version="1.0.0.0-SNAPSHOT"
}
The problem is that the pipeline for whatever reason does not recognize the function inside the script. The logs are:
...
$ test_pipe
/scripts-1050-417479/step_script: eval: line 180: test_pipe: not found
Does anybody know what happened here? I really miss Jenkins shared libraries; GitLab does not have them, and it also does not have a built-in way to include scripts inside YAML files.
I don't want to use a multi-project pipeline; I need to do it this way. This is only an example of more complicated pipeline logic.
Thanks in advance
As the documentation states, before_script is just concatenated together with script and run in a single shell. The script you are downloading does not define test_pipe.
... gitlab does not have the function to include scripts inside yml files.
It does, just use the YAML multiline literal syntax with |, e.g.:
script:
  - |
    echo "this"
    echo "is"
    echo "an \
    example"
Is it possible to make a build Pipeline with a file-based trigger?
Let's say I have the following Directory structure.
Microservices/
|_Service A
  |_Test_Stage
    |_Testing_Config
  |_QA_Stage
    |_QA_Config
  |_Prod_stage
    |_Prod_Config
|_Service B
  |_Test_Stage
    |_Testing_Config
  |_QA_Stage
    |_QA_Config
  |_Prod_stage
    |_Prod_Config
I want to have just one single YAML Build Pipeline File.
Based on the Variables $(Project) & $(Stage) different builds are created.
Is it possible to check what directory/file initiated the Trigger and set the variables accordingly?
Additionally, it would be great if it's possible to use those variables to tag the artifact after the run.
Thanks
KR
Is it possible to check what directory/file initiated the Trigger and set the variables accordingly?
Of course yes. But there's no direct way, since there are no predefined variables that store this information, so you need an additional workaround to get it.
#1:
Although there is no variable that directly stores which folder and file were modified, you can get this by looking up the commit from Build.SourceVersion via the API.
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits/{commitId}/changes?api-version=5.1
From its response body, you can see which paths and files were changed.
Since the response body is in JSON format, you can use a JSON-parsing function to extract the path value. See this similar script as a reference.
Then use a PowerShell script to set these values as pipeline variables that the following jobs/tasks can use.
Also, in your scenario, all of this should finish before the next jobs start, so you could consider creating a simple extension with a pipeline decorator. Define all of the above steps in the decorator so that they run in the pre-job of every pipeline.
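For completeness, a rough sketch of approach #1 as an inline Bash step (this assumes the job is allowed to use System.AccessToken and that jq is available on the agent; the Project/Stage variable names are illustrative):

- task: Bash@3
  name: DetectChange
  inputs:
    targetType: 'inline'
    script: |
      # Ask the REST API which files the triggering commit changed
      changes=$(curl -s -H "Authorization: Bearer $(System.AccessToken)" \
        "$(System.CollectionUri)$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.ID)/commits/$(Build.SourceVersion)/changes?api-version=5.1")
      # First changed path, e.g. /Microservices/Service A/Test_Stage/Testing_Config/...
      path=$(echo "$changes" | jq -r '.changes[0].item.path')
      # Derive Project and Stage from the path segments and expose them as output variables
      echo "##vso[task.setvariable variable=Project;isOutput=true]$(echo "$path" | awk -F/ '{print $3}')"
      echo "##vso[task.setvariable variable=Stage;isOutput=true]$(echo "$path" | awk -F/ '{print $4}')"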
#2
If the above method feels a little complex, I would rather suggest making use of the commit message. For example, specify the project name and file name in the commit message and read them via the Build.SourceVersionMessage variable.
Then use a PowerShell script (as mentioned above) to set them as pipeline variables.
This is more convenient than using the API to parse the commit's changes.
Hope one of these helps.
Thanks for your reply.
I tried a different approach with a Bash script, since I only use Ubuntu images.
I run git log filtered to the last commit that touched the Microservices directory.
With some awk (not a very satisfying solution) I extract the Project and Stage and write them into pipeline variables.
The pipeline is only triggered when there is a change under the Microservices/* path.
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - Microservices/*
The first job that runs when the trigger fires is the Dynamic_Variables job.
I use this job only to set the variables $(Project) and $(Stage). The build tags are also set from those variables, so I am able to differentiate the artifacts in the releases.
jobs:
  - job: Dynamic_Variables
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - checkout: self
      - task: Bash@3
        name: Dynamic_Var
        inputs:
          filePath: './scripts/multi-usage.sh'
          arguments: '$(Build.SourcesDirectory)'
        displayName: "Set Dynamic Variables Project"
      - task: Bash@3
        inputs:
          targetType: 'inline'
          script: |
            set +e
            if [ -z $(Dynamic_Var.Dynamic_Project) ]; then
              echo "target Project not specified";
              exit 1;
            fi
            echo "Project is:" $(Dynamic_Var.Dynamic_Project)
        displayName: 'Verify that the Project parameter has been supplied to pipeline'
      - task: Bash@3
        inputs:
          targetType: 'inline'
          script: |
            set +e
            if [ -z $(Dynamic_Var.Dynamic_Stage) ]; then
              echo "target Stage not specified";
              exit 1;
            fi
            echo "Stage is:" $(Dynamic_Var.Dynamic_Stage)
        displayName: 'Verify that the Stage parameter has been supplied to pipeline'
The Bash Script I run in this Job looks like this:
#!/usr/bin/env bash
set -euo pipefail

WORKING_DIRECTORY=${1}
cd "${WORKING_DIRECTORY}"

# Paths changed by the last commit that touched Microservices/
CHANGEPATH="$(git log -1 --name-only --pretty='format:' -- Microservices/)"

# The second path segment is the service/project, the fourth is the changed config file;
# the stage is the part of the file name before the first '-'
Project=$(echo $CHANGEPATH | awk -F[/] '{print $2}')
CHANGEFILE=$(echo $CHANGEPATH | awk -F[/] '{print $4}')
Stage=$(echo $CHANGEFILE | awk -F[-] '{print $1}')

# Expose the values as output variables and tag the build with them
echo "##vso[task.setvariable variable=Dynamic_Project;isOutput=true]${Project}"
echo "##vso[task.setvariable variable=Dynamic_Stage;isOutput=true]${Stage}"
echo "##vso[build.addbuildtag]${Project}"
echo "##vso[build.addbuildtag]${Stage}"
If someone has a better solution than the awk commands, please let me know.
Thanks a lot.
KR
I'm working on a Java app that uses multiple APIs, and I would like to keep the API tokens out of the public GitLab repository. The app is packaged and deployed to a remote server, and I don't know how else to make the tokens available without including them in the GitLab repository.
Is there a way I can restrict the viewing of a file (or part of it) to sort of "redact" these tokens? Or should I go about it a different way?
Don't put API keys in your repo. Inject them into your use of the repo via environment variables maintained by your deployment system. If your deployment system doesn't have that ability, you probably need to change it. It doesn't need to be complicated: for example, deploy your code from Git, then copy a .env file into place separately. If your deployment mechanism only lets you use Git repos, you could put your env vars into a separate repo that is kept private.
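A minimal sketch of that environment-variable approach in a GitLab CI deploy job; the host, path, and variable name are purely illustrative, and API_TOKEN would be defined as a masked CI/CD variable in the project settings rather than in the repository:

deploy:
  stage: deploy
  script:
    # API_TOKEN comes from the project's CI/CD variables, never from the repository
    - echo "API_TOKEN=$API_TOKEN" > .env
    - scp .env deploy@example.com:/opt/myapp/.env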
I have a similar situation with injecting a google-services.json file into an Android application. Long story short, our app targets multiple environments, and the production environment file must somehow be available to the build pipelines (committed or otherwise).
As pointed out in the previous response, having this information committed in the main repo is not ideal. Developers could accidentally use the production environment while testing, for example.
How we solved this
First, documentation. All those files (google-services.json and similar) are ignored in Git, and the developer documentation states you must add your own.
Second, the CI build pipelines. We are also using GitLab, and we store those files as base64-encoded strings in CI variables, then control access to those variables via the protected tags/branches mechanism GitLab offers.
Serializing the files
There are two steps involved here. First, serialize the actual file to base64. Second, deserialize the file from base64 into its appropriate location.
base64 --wrap=0 google-services.json (the --wrap=0 option prevents line wrapping when run directly in a console). Then store the output in a GitLab CI variable.
In the .gitlab-ci.yml file do the inverse to inject the file.
echo $VAR_NAME | base64 -d > where/you/need/the/file.
You then control the appropriate environment to use via the $VAR_NAME variable.
An example of this can be found at https://gitlab.com/snippets/1926611. That case is for an XML file with the Google Maps API key, but the process is identical.
You can create a variable in your GitLab project settings. The variable can then be used in your .gitlab-ci.yml file.
For example,
create a variable named GOOGLE_SERVICE_JSON and set its value to the base64-encoded content of the file. You can get it by running base64 google-services.json
update your .gitlab-ci.yml file to decode the GOOGLE_SERVICE_JSON value into a google-services.json file like this:
assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/
You can also use this method to encode the keystore file into a variable and decode it back to a file during the pipeline build.
Here is a full example
image: openjdk:8-jdk

variables:
  ANDROID_COMPILE_SDK: "28"
  ANDROID_BUILD_TOOLS: "28.0.3"
  ANDROID_SDK_TOOLS: "6609375_latest"

before_script:
  - echo ANDROID_COMPILE_SDK ${ANDROID_COMPILE_SDK}
  - echo ANDROID_BUILD_TOOLS ${ANDROID_BUILD_TOOLS}
  - echo ANDROID_SDK_TOOLS ${ANDROID_SDK_TOOLS}
  - apt-get --quiet update --yes
  - apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1
  - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_SDK_TOOLS}.zip
  - unzip -d android-sdk-linux android-sdk.zip
  - export ANDROID_SDK_ROOT=$PWD/android-sdk-linux
  - export SDK_MANAGER="${ANDROID_SDK_ROOT}/tools/bin/sdkmanager --sdk_root=${ANDROID_SDK_ROOT}"
  - echo y | ${SDK_MANAGER} "platforms;android-${ANDROID_COMPILE_SDK}" >/dev/null
  - echo y | ${SDK_MANAGER} "platform-tools" >/dev/null
  - echo y | ${SDK_MANAGER} "build-tools;${ANDROID_BUILD_TOOLS}" >/dev/null
  - export PATH=$PATH:${ANDROID_SDK_ROOT}/platform-tools/
  - chmod +x ./gradlew
  # temporarily disable checking for EPIPE error and use yes to accept all licenses
  - set +o pipefail
  - echo y | ${SDK_MANAGER} --licenses
  - set -o pipefail

stages:
  - build

assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/

assembleRelease:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleRelease
  artifacts:
    paths:
      - app/build/outputs/