I have a GitLab Runner connected to a real device (acting as a server or similar). I'm using GitLab and my Runner to automate this device and basically remote-control it.
I now want to feed a variable to the device, in this case an IP address, and I want to do it through GitLab variables.
The problem is that at runtime the variable is not substituted with the actual IP address, so it lands on the device still in GitLab variable format. This causes the server to return an error, since it cannot understand the given value.
Is there any way, using GitLab variables, to pass this parameter to the device while making sure the variable is substituted with the actual value before being sent?
Thank you
This is how I defined the parameter in the gitlab-ci file:
variables:
  IPADDRESS_1: 170.x.x.10
...
The other file is the yml configuration file for my server (the actual device).
NOTICE: this is NOT the gitlab-ci file, but a server-specific yml configuration file, which is copied from GitLab to the server itself.
...
export_version: '0.2'
name: Example configuration yml
host: ${IPADDRESS_1}
...
This is the error message from the server (notice that the variable was not translated):
"errors": {
"invalid_ip": [
{
"id": null,
"name": "Example configuration yml",
"host": "${IPADDRESS_1}",
...
Regarding the question itself ("GitLab variable not translated in runtime"): why would it be? The GitLab Runner simply executes commands in a shell of your choice, and a proper shell won't modify your files unless you explicitly tell it to.
The easiest way to substitute variables with values in a file using bash is the envsubst command. From its man page:
envsubst - substitutes environment variables in shell format strings
In normal operation mode, standard input is copied to standard output, with references to environment variables of the form $VARIABLE or ${VARIABLE} being replaced with the corresponding values.
If a SHELL-FORMAT is given, only those environment variables that are referenced in SHELL-FORMAT are substituted; otherwise all environment variables references occurring in standard input are substituted.
So in your case, assuming you've set IPADDRESS_1 to 170.x.x.10 in GitLab and have a file called test.yaml with the following content:
"errors": {
"invalid_ip": [
{
"id": null,
"name": "Example configuration yml",
"host": "${IPADDRESS_1}",
Executing the command
envsubst < test.yaml > substituted.yaml
will create substituted.yaml (overwriting any existing file with that name) with the following content:
"errors": {
"invalid_ip": [
{
"id": null,
"name": "Example configuration yml",
"host": "170.x.x.10",
Be aware that you can't read and write the same file using shell redirects (i.e. envsubst < test.yaml > test.yaml), because doing so will simply erase the contents of test.yaml; see https://unix.stackexchange.com/questions/36261/can-i-read-and-write-to-the-same-file-in-linux-without-overwriting-it
To test this locally, you can run export IPADDRESS_1=170.x.x.10 before running the previous command.
If you have something more complex and want to generate YAML (or any other text file) based on variables, I suggest using a templating engine such as Jinja.
Related
The Azure DevOps pipeline has this variable:
Name: pat
Value: Git repo authentication token
The pipeline has a Bash script task. It is set to filepath. Filepath is set to script.sh. script.sh begins with:
git clone https://username:$(PAT)@dev.azure.com/company/project/_git/repo-name
Errors in pipeline logs:
PAT: command not found
Cloning into 'repo-name'...
fatal: Authentication failed for 'https://dev.azure.com/healthcatalyst/CAP/_git/docs-template/'
To validate that the authentication token and repo URL are accurate, I can verify this works when run as inline code:
git clone https://username:$(pat)@dev.azure.com/company/project/_git/repo-name
script.sh file is in repo-name.
However, environment variables work. Both of the following return the correct value within the script (note that one has no quotes and the other does):
echo $BUILD_REPOSITORY_NAME
repo-name
echo "$BUILD_REPOSITORY_NAME"
repo-name
Based on the documentation I've seen (I'm having difficulty with Microsoft's docs because I'm not using a YAML file), I've tried the following, unsuccessfully:
$pat
$PAT
$(PAT)
"$(PAT)"
gitToken=`echo $PAT`
Does anyone know what I'm doing wrong? Thank you for any tips.
Is your PAT variable a secret variable?
If so, it's not directly accessible in script files.
As you can see in the documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables
Unlike a normal variable, they are not automatically decrypted into environment variables for scripts. You need to explicitly map secret variables.
Example:
...
env:
  MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
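On the script side, once the secret is mapped it is read as an ordinary environment variable. A minimal sketch of what script.sh could then do (the export here only simulates the mapping for illustration; in the pipeline the value comes from the env: block on the Bash task):

```shell
# Simulate the mapped secret for this sketch; in Azure DevOps the value
# is injected by the `env:` mapping on the task, not set in the script.
export PAT="dummy-token"

# Build the authenticated clone URL the same way the question does,
# but using shell syntax ($PAT), not pipeline macro syntax ($(PAT)).
repo_url="https://username:${PAT}@dev.azure.com/company/project/_git/repo-name"
echo "$repo_url"
```

This also explains the "PAT: command not found" error: inside a shell script, $(PAT) is command substitution, so the shell tried to run a command named PAT.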
Or, if you are using the visual editor, add the equivalent mapping in the task's Environment Variables section.
Use System.AccessToken instead of a personal access token (PAT):
git clone https://$SYSTEM_ACCESSTOKEN@dev.azure.com/company/project/_git/repo-name
To enable $SYSTEM_ACCESSTOKEN, go to the release page in Azure DevOps > Tasks > Agent job > check "Allow scripts to access the OAuth token".
I have a GitHub Actions workflow that uses Terraform for its deployment.
When Terraform is done, I want to take the Terraform output and send it to the next job in the workflow so that pieces can be extracted and used. Specifically, my Terraform deploys an Azure Function and then outputs the function app name. This then gets used to tell the next job where to deploy the Function code.
However, when I redirect the output of terraform output like so:
- name: save tf output
  run: terraform output -json > tfoutput.json
  shell: bash
  working-directory: terraform
and then put it into a job artifact
- name: Upload output file
  uses: actions/upload-artifact@v2
  with:
    name: terraform-output
    path: terraform/tfoutput.json
the resulting file's content looks like this:
[command]/home/runner/work/_temp/fb419afc-033e-4058-b5f3-c44b90cb0bd0/terraform-bin output -json
{
"functionappname": {
"sensitive": false,
"type": "string",
"value": "telemetry-function"
}
}
::debug::Terraform exited with code 0.
::debug::stdout: {%0A "functionappname": {%0A "sensitive": false,%0A "type": "string",%0A "value": "telemetry-function"%0A }%0A}%0A
::debug::stderr:
::debug::exitcode: 0
::set-output name=stdout::{%0A "functionappname": {%0A "sensitive": false,%0A "type": "string",%0A "value": "telemetry-function"%0A }%0A}%0A
::set-output name=stderr::
::set-output name=exitcode::0
Which means, of course, it's definitely not machine readable as JSON output from Terraform should be.
I've yet to find any way to get all that extraneous junk to be removed. It's worth noting that in Azure DevOps this flow of work performs exactly as one would expect.
I took the approach of putting everything in one job (to avoid redirecting output and passing an artifact) and then using terraform output | jq -r ... to pull the value out of the Terraform output, and it still doesn't work. The output from this command really is all that junk, for some reason.
Not sure if this is something I can work around, a bug in the terraform action, or a bug in GH Actions in general.
Additionally, where do I file bugs on GH Actions???
The solution is to add terraform_wrapper: false to your Setup Terraform step:
- name: Setup terraform
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: ${{ env.TERRAFORM_VERSION }}
    terraform_wrapper: false
as, by default, the Terraform action will wrap all its output in this junk. 🤷🏻‍♂️
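With the wrapper disabled, terraform output -json prints clean JSON, so a single value can be extracted with jq as attempted in the question. A sketch using the sample output from above (the JSON is inlined here so the example runs without Terraform; in the job you would pipe terraform output -json directly):

```shell
# Stand-in for `terraform output -json`, using the JSON shown in the question.
tf_json='{"functionappname":{"sensitive":false,"type":"string","value":"telemetry-function"}}'

# jq -r strips the surrounding quotes so the value is usable directly
# in later steps (e.g. as the Function app name to deploy to).
func_name=$(printf '%s' "$tf_json" | jq -r '.functionappname.value')
echo "$func_name"
```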
I want to set quite a few variables in Jenkins. I have tried putting them in .bashrc, .bash_profile, and .profile of the jenkins user, but Jenkins cannot find them when a build is running.
The only way that is working is to put all the env variables inside the Jenkinsfile like this:
env.INTERCOM_APP_ID = '12312'
env.INTERCOM_PERSONAL_ACCESS_TOKEN = '1231'
env.INTERCOM_IDENTITY_VERIFICATION_KEY='asadfas'
But I don't think this is a good way of doing it.
What is the correct way of setting env variables in Jenkins?
To me, this seems quite normal. INTERCOM_PERSONAL_ACCESS_TOKEN and INTERCOM_IDENTITY_VERIFICATION_KEY should be treated as text credentials, and you can use the environment directive to add environment variables.
stages {
  stage('Example') {
    environment {
      INTERCOM_APP_ID = '12312'
      INTERCOM_PERSONAL_ACCESS_TOKEN = credentials('TokenCredentialsID')
      INTERCOM_IDENTITY_VERIFICATION_KEY = credentials('VerificationCredentialsID')
    }
    steps {
      echo "Hello ${env.INTERCOM_APP_ID}"
    }
  }
}
If you need to keep environment variables separate from the Jenkinsfile, you can create a Groovy file that contains all of them and then load it into the Jenkinsfile using
load "$JENKINS_HOME/.envvars/stacktest-staging.groovy"
For more information, take a look at the following links:
https://jenkins.io/doc/pipeline/steps/workflow-cps/
SO: Load file with environment variables ...
Jenkins resets environment variables to some defaults for its jobs. The best way to set them is in the Jenkins configuration: you can set global variables, variables local to a project, or variables local to a node.
I don't remember offhand whether this feature is built in or provided by a plugin.
I am using Go server for continuous integration of our code. For my environment-deploy-template, I wish to set certain environment variables on the stage and then echo them into the property files for the application. What Linux command could I use in my job to do so?
For example, it could be some thing like :
echo "propName=#{env variable}\n">>prop files location
Could someone please confirm this?
The syntax to reference a go.cd environment variable is ${ENV_VAR}, and a full command is:
echo propName=${ENV_VAR} >> props.txt
More details on environment variables: Using Environment Variables in Go
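For instance, a task could append several GoCD-provided variables to a properties file like this (the variable values are simulated here; in a real job GoCD sets pipeline variables such as GO_PIPELINE_NAME itself, and the file path is illustrative):

```shell
# Simulate variables that GoCD would normally set on the stage.
export GO_PIPELINE_NAME=demo
export GO_PIPELINE_COUNTER=7

# Append one property per line; >> creates the file if it doesn't exist.
{
  echo "pipeline=${GO_PIPELINE_NAME}"
  echo "build=${GO_PIPELINE_COUNTER}"
} >> props.txt

cat props.txt
```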
I'm trying to use Boxen to setup our dev environment. We have a number of repos that we want to pull down and run a script to get started. We landed on a convention: repos have a scripts/ directory with a bootstrap script that needs to be run.
It looks like this would be possible with the exec command. But in order to tell it what to run, I have to access the repo's directory. Other scripts use $repo_dir or ${boxen::config:srcdir}/${name}. I've tried each of these, and a number of different styles of exec, to no avail.
The Manifest
class projects::hero {
  include ruby

  boxen::project { 'hero':
    ruby   => '2.0.0',
    source => 'myorg/hero'
  }
  ->
  Exec {
    command => '$repo_dir/scripts/echo'
  }
  ->
  notify {'hero is running at $srcdir':}
}
This is simpler than the stated goal. The scripts need to be run within the directory they reside. So my first (and hopefully eventual) manifest would have something like this for the exec step:
->
exec { 'running bootstrap on hero':
  command => '$repo_dir/scripts/bootstrap',
  cwd     => '$repo_dir/scripts'
}
The script
For right now, scripts/echo is super simple:
#!/bin/bash
echo "Echo File!"
touch `date`
Since the output isn't really going to be seen, the script creates a file named after the current date so we can observe this side effect and know that the script actually ran.
Calling boxen
I just call this project directly from the manifests directory:
Chris:manifests chris$ boxen hero
The output
Warning: Scope(Class[Boxen::Environment]): Setting up 'hero'. This can be made permanent by having 'include projects::hero' in your personal manifest.
Error: Could not find resource 'command => $repo_dir/scripts/echo' for relationship from 'Boxen::Project[hero]' on node chris.local
Error: Could not find resource 'command => $repo_dir/scripts/echo' for relationship from 'Boxen::Project[hero]' on node chris.local
This is also true if I try ${boxen::config::srcdir} instead. Looking at other examples, these variables are used and seem to work. Am I calling it wrong? Is there a different variable I should be using?
I've noticed two mistakes in your manifest here:
->
Exec {
command => '$repo_dir/scripts/echo'
}
->
The first is that you've capitalized the first letter of exec. In the Puppet language this means you are specifying a default for all subsequent exec resource definitions (docs). It is not a resource definition itself, so resource ordering cannot be applied to it, hence the error.
Another mistake is the use of single quotes in combination with variables. Single-quoted strings are interpreted as literals: '$repo_dir' is the literal string $repo_dir, while "$repo_dir" is interpreted as the contents of the variable $repo_dir (docs).
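Puppet's quoting behaviour mirrors the shell's here, so the difference is easy to see in bash (the path is a hypothetical checkout location):

```shell
repo_dir=/tmp/hero   # hypothetical checkout path

# Single quotes: the variable is NOT expanded; the literal text is kept.
single=$(echo '$repo_dir/scripts/echo')

# Double quotes: the variable IS expanded to its value.
double=$(echo "$repo_dir/scripts/echo")

echo "$single"
echo "$double"
```

The same rule explains why the exec's command would never have pointed at a real path even if the ordering error were fixed.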
Hope this helps,
Good luck