How can I keep a file private on a public GitLab repository?

I'm working on a Java app that uses multiple APIs and would like to keep the API tokens out of the public GitLab repository. The app is packaged and deployed to a remote server, and I don't know how else to make the tokens available to it without including them in the repository.
Is there a way I can restrict the viewing of a file (or part of it) to sort of "redact" these tokens? Or should I go about it a different way?

Don't put API keys in your repo. Inject them into your use of the repo via environment variables maintained by your deployment system. If your deployment system doesn't have that ability, you probably need to change it. It doesn't need to be complicated - for example, deploy your code from git, then copy a .env file into place in a separate step. If your deployment mechanism only lets you use git repos, you could put your env vars into a separate repo that is kept private.
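As a rough sketch (host and paths hypothetical), the deploy job can write the .env file at deploy time from a variable held by the deployment system, so the token never enters the repo:

deploy:
  stage: deploy
  script:
    # API_TOKEN is a masked CI/CD variable defined in the project settings;
    # the Java app reads it at runtime, e.g. via System.getenv("API_TOKEN")
    - echo "API_TOKEN=$API_TOKEN" > .env
    - scp .env deploy@app.example.com:/srv/app/.env  # hypothetical host and path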

I have a similar situation with injecting a google-services.json file into an Android application. Long story short, we have multiple environments our app targets, and the production environment file must somehow be available to the build pipelines (committed, or something similar).
As pointed out in the previous response, having this information committed in the main repo is not ideal. Developers could accidentally use the production environment while testing, for example.
How we solved this
First, documentation. All those files (google-services.json and others like it) are ignored in git, and the developer documentation states you must add your own.
Second, the CI build pipelines. We are also using GitLab, and we store those files as base64-encoded strings in CI variables, controlling access to those variables via the protected tags/branches mechanism GitLab offers.
Serializing the files
There are two steps involved here: first, serialize the actual file to base64; second, de-serialize it from base64 into its appropriate location.
base64 --wrap=0 google-services.json (the --wrap=0 option prevents line wrapping when run directly in a console). Then store the output in a GitLab CI variable.
In the .gitlab-ci.yml file, do the inverse to inject the file:
echo $VAR_NAME | base64 -d > where/you/need/the/file
You then control the appropriate environment to use via the $VAR_NAME variable.
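For instance (variable names hypothetical), a job can pick between per-environment variables, leaning on the protected tags/branches scoping mentioned above:

inject_config:
  script:
    # GOOGLE_SERVICES_PROD and GOOGLE_SERVICES_DEV are hypothetical protected
    # CI variables, each holding a base64-encoded google-services.json
    - |
      if [ "$CI_COMMIT_REF_PROTECTED" = "true" ]; then
        echo "$GOOGLE_SERVICES_PROD" | base64 -d > app/google-services.json
      else
        echo "$GOOGLE_SERVICES_DEV" | base64 -d > app/google-services.json
      fi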
An example of this can be found at https://gitlab.com/snippets/1926611. That case is for an XML file with the Google Maps API key, but the process is identical.

You can create a variable in your GitLab project settings. The variable can then be used in your .gitlab-ci.yml file.
For example,
create a variable named GOOGLE_SERVICE_JSON and set its value to the base64-encoded file content, which you can get with the command base64 google-services.json
update your .gitlab-ci.yml file to decode the GOOGLE_SERVICE_JSON value back into a google-services.json file, like this:
assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/
You can also use this method to encode the keystore file into a variable and decode it back to a file during the pipeline build.
Here is a full example
image: openjdk:8-jdk

variables:
  ANDROID_COMPILE_SDK: "28"
  ANDROID_BUILD_TOOLS: "28.0.3"
  ANDROID_SDK_TOOLS: "6609375_latest"

before_script:
  - echo ANDROID_COMPILE_SDK ${ANDROID_COMPILE_SDK}
  - echo ANDROID_BUILD_TOOLS ${ANDROID_BUILD_TOOLS}
  - echo ANDROID_SDK_TOOLS ${ANDROID_SDK_TOOLS}
  - apt-get --quiet update --yes
  - apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1
  - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_SDK_TOOLS}.zip
  - unzip -d android-sdk-linux android-sdk.zip
  - export ANDROID_SDK_ROOT=$PWD/android-sdk-linux
  - export SDK_MANAGER="${ANDROID_SDK_ROOT}/tools/bin/sdkmanager --sdk_root=${ANDROID_SDK_ROOT}"
  - echo y | ${SDK_MANAGER} "platforms;android-${ANDROID_COMPILE_SDK}" >/dev/null
  - echo y | ${SDK_MANAGER} "platform-tools" >/dev/null
  - echo y | ${SDK_MANAGER} "build-tools;${ANDROID_BUILD_TOOLS}" >/dev/null
  - export PATH=$PATH:${ANDROID_SDK_ROOT}/platform-tools/
  - chmod +x ./gradlew
  # temporarily disable checking for EPIPE error and use yes to accept all licenses
  - set +o pipefail
  - echo y | ${SDK_MANAGER} --licenses
  - set -o pipefail

stages:
  - build

assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/

assembleRelease:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleRelease
  artifacts:
    paths:
      - app/build/outputs/

Related

Read variable from file for usage in GitLab pipeline

Given the following very simple .gitlab-ci.yml pipeline:
---
variables:
  KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo...

stages:
  - test

build:
  stage: test
  script:
    - echo "$KEYCLOAK_VERSION"
As you might see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section.
Now, the Git repository contains an env.properties file with KEYCLOAK_VERSION=20.0.1 as its content. How would I read the variable from that file and use it in the GitLab pipeline?
The documentation mentions include, but that seems to work with YAML files.
To read variables from a file you can use the source (or .) command.
script:
  - source env.properties
  - echo $KEYCLOAK_VERSION
Attention:
One reason why you might not want to do it this way is that whatever is in env.properties will be executed in your shell - including something like rm -rf /, which could be very dangerous.
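If you only need that single value, here is a minimal sketch that avoids executing the file (names taken from the question):

script:
  # read just the KEYCLOAK_VERSION line instead of sourcing the whole file
  - KEYCLOAK_VERSION=$(grep -m1 '^KEYCLOAK_VERSION=' env.properties | cut -d= -f2)
  - echo "$KEYCLOAK_VERSION"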
Maybe you can take a look here for some other solutions.

Gitlab pipeline error with source sh script

I have a simple pipeline with one job to test bash scripts. The pipeline is as follows:
image: alpine/git

stages:
  - test_branching

test_branch:
  stage: test_branching
  before_script:
    - mkdir -p .common
    - wget https://x.x.x.x/branching.sh > .common/test.sh && chmod +x .common/test.sh
    - source .common/test.sh
  script:
    - test_pipe
    - echo "app version is ${app_version}"
The bash script is as follows:
#!/bin/sh
function test_pipe () {
  app_version="1.0.0.0-SNAPSHOT"
}
The problem is that the pipeline for whatever reason does not recognize the function inside the script. The logs are:
...
$ test_pipe
/scripts-1050-417479/step_script: eval: line 180: test_pipe: not found
Does anybody know what happened here? I really miss Jenkins shared libraries; GitLab does not have them, and it also does not have a built-in way to include scripts inside yml files.
I don't want to use a multi-project pipeline; I need to do it this way. This is only an example of a more complicated pipeline logic.
Thanks in advance
As the documentation states, before_script is just concatenated together with script and run in a single shell, so sourcing a file in before_script does work - the problem is that the file you source does not define test_pipe. wget https://x.x.x.x/branching.sh > .common/test.sh saves the download to ./branching.sh (wget's default file name, taken from the URL) and redirects wget's empty stdout into .common/test.sh, so you end up sourcing an empty file.
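A minimal fix, keeping everything else as-is (the URL is the placeholder from the question):

before_script:
  - mkdir -p .common
  # -O writes the downloaded content to the given path; a plain shell redirect does not
  - wget -O .common/test.sh https://x.x.x.x/branching.sh && chmod +x .common/test.sh
  - source .common/test.sh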
... gitlab does not have the function to include scripts inside yml files.
It does, just use the YAML multiline literal syntax with |, e.g.:
script:
  - |
    echo "this"
    echo "is"
    echo "an \
    example"

xgettext dry run to avoid unnecessary commits in Github workflow

Is there any way to do a dry run of xgettext on source files, in order to simply check if there are any differences compared to the current .pot file?
I have set up a Github workflow that will run xgettext on source files any time a change to a source file is pushed to the repository. The result is that often the change to the source file didn't change the translation strings, so the only difference in the resulting .pot file is the Creation date, which gets updated every time xgettext is run. This makes for unnecessary commits, and triggers unnecessary webhook calls to my weblate instance which picks up on an "updated" .pot file, and winds up generating its own Pull Request with an "updated" .pot file.
If there were a way to do a dry run and first check if there are any actual differences in the strings, I could avoid unnecessary commits and PRs from polluting my repo. Any ideas?
I was able to add a filter to my GitHub workflow, checking whether there are any significant changes besides a simple update to the POT-Creation-Date value in the .pot file. I added an id to the step that runs xgettext; after running xgettext, I save the count of significant changed lines in the .pot file to an output variable that is accessible to the next step:
- name: Update source file translation strings
  id: update_pot
  run: |
    sudo apt-get install -y gettext
    xgettext --from-code=UTF-8 --add-comments='translators:' --keyword="pgettext:1c,2" -o i18n/litcal.pot index.php
    echo "::set-output name=POT_LINES_CHANGED::$(git diff -U0 | grep '^[+|-][^+|-]' | grep -Ev '^[+-]"POT-Creation-Date' | wc -l)"
Then I check against this variable before running the commit step:
- name: Push changes # push the output folder to your repo
  if: ${{ steps.update_pot.outputs.POT_LINES_CHANGED > 0 }}
  uses: actions-x/commit@v4
  with:
    # The committer's email address
    email: 41898282+github-actions[bot]@users.noreply.github.com
    # The committer's name
    name: github-actions
    # The commit message
    message: regenerated i18n/litcal.pot from source files
etc.

How to get project version in gitlab-ci.yml file from the .net project file?

I am converting my project from Jenkins to GitLab CI. There is a .sh file, executed from my .gitlab-ci.yml file, where I extract the version from the project file using the following statement:
VERSION=$(grep -oPm1 "(?<=<Version>)[^<]+" Service.csproj)
I am getting the project version and this is working fine.
How can I run the above statement in .gitlab-ci.yml file and assign the version value to a variable?
I tried running the statement directly, but I get an error like invalid option "P".
In my example I use the sed command:
stages:
  - test

test:
  stage: test
  script:
    - VERSION=$(sed -n "s/<Version>\(.*\)<\/Version>/\1/p" Service.csproj)
    - echo $VERSION
Somehow the above grep didn't work for me. The following (PowerShell) works for me:
$VERSION=[regex]::match((Get-Content .\AssemblyFileVersion.cs), "AssemblyFileVersion\(`"(.*)`"\)").Groups[1].Value
You need to execute the command in a script block, so your gitlab-ci.yml could look like this:
stages:
  - test

test:
  stage: test
  script:
    - VERSION=$(grep -oPm1 "(?<=<Version>)[^<]+" Service.csproj | xargs)
    - echo $VERSION
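The trailing xargs simply trims the whitespace around the match. Also note that grep -P requires PCRE support (GNU grep); the invalid option "P" error typically means the job image ships a grep without it (e.g. BusyBox), in which case the sed variant above is the more portable choice.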

Publishing test results to Azure (VS Database Project, tSQLt, Azure Pipelines, Docker)

I am trying to fully automate the build, test, and release of a database project using Azure Pipeline.
I already have a Visual Studio solution which consists of three database projects. The first project is the database, which contains the tables, stored procedures, functions, data, etc. The second project is the tSQLt framework (v 1.0.5873.27393, if anyone is interested). And finally, the third project is the tSQLt tests.
My goal here is to check the solution into source control and have the pipeline automatically build the solution, deploy the dacpacs to a build server (docker in this case), run the tSQLt tests, and publish the results back to the pipeline.
My pipeline works like this:
Build the Visual Studio solution
Publish the artifacts
Set up a docker container running Ubuntu & SQL Server
Install SQLPackage
Deploy the dacpacs to the SQL instance
Run the tSQLt tests
Publish the test results
Everything up to publishing the results is working, but on this step I got the following error:
##[warning]Failed to read /home/vsts/work/1/Results.xml. Error : Data at the root level is invalid. Line 1, position 1.
I added another step in the pipeline to display the content of the Results.xml file. It appears like this:
XML_F52E2B61-18A1-11d1-B105-00805F49916B
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
<testsuites><testsuite id="1" name="MyNewTestClassOne" tests="1" errors="0" failures="0" timestamp="2021-02-01T10:40:31" time="0.000" hostname="f6a05d4a3932" package="tSQLt"><properties/><testcase classname="MyNewTestClassOne" name="TestNumberOne" time="0.
I'm not sure if the column name and dashes should be in the file, but I'm guessing not. I added another step in the pipeline to remove them, leaving me with just the XML. But this then gave me a different error to deal with:
##[warning]Failed to read /home/vsts/work/1/Results.xml. Error : There is an unclosed literal string. Line 2, position 1.
This one is a little obvious to spot, because as you'll see above, the XML is incomplete.
Here is the part of my pipeline which runs the tSQLt tests and outputs the results to Results.xml:
- script: |
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'EXEC tSQLt.RunAll;'
  displayName: 'tSQLt - Run All Tests'

- script: |
    cd $(Pipeline.Workspace)
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'SET NOCOUNT ON; EXEC tSQLt.XmlResultFormatter;' -o 'tSQLt_Results.xml'
  displayName: 'tSQLt - Output Results'
I've researched so many blogs and articles on this, and most people are doing the same. Some people use PowerShell instead of sqlcmd, but given I'm using an Ubuntu machine this isn't an option here.
I am all out of options, so I am looking for a little help on this.
You are dealing with two problems here: there is noise in your result set that is not XML, and your XML result is truncated after 256 characters. I can help you with both.
What I am doing is basically this:
/opt/mssql-tools/bin/sqlcmd \
  -S "localhost, 31114" -U sa \
  -P "password" \
  -d dbname \
  -y0 \
  -Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
  | grep -w "<testsuites>" \
  | tee "resultfile.xml"
A few things to note:
-y0 is important. It lifts the display width limit on variable-length columns, so the XML result is no longer truncated at 256 characters.
The grep makes sure you only get the XML line and not the noise around it.
If you want to run only a subset of your tests, you need to amend the SQL query being passed in; other than that, this is a catch-all "oneliner" to run all tests and get the results in XML format, readable by Azure DevOps.
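With the noise filtered out and the truncation lifted, the publish step should be able to parse the file. For reference, a typical Azure Pipelines publish step would look roughly like this (the JUnit format matches what tSQLt's XmlResultFormatter emits; the file name follows the tee example above):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: 'resultfile.xml'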
