I want to use arrays as variables in my GitLab CI/CD YAML file, something like this:
variables:
  myarray: ['abc', 'dcef']
....
script: |
  echo myarray[0] myarray[1]
But Lint tells me that the file is incorrect:
variables config should be a hash of key value pairs, value can be a hash
I've also tried the following:
variables:
  arr[0]: 'abc'
  arr[1]: 'cde'
....
script: |
  echo $arr[0] $arr[1]
But the build failed and printed a bash error:
bash: line 128: export: `arr[0]': not a valid identifier
Is there any way to use an array variable in a .gitlab-ci.yml file?
According to the docs, this is what you should be doing:
It is not possible to create a CI/CD variable that is an array of values, but you can use shell scripting techniques for similar behavior.
For example, you can store multiple variables separated by a space in a variable, then loop through the values with a script:
job1:
  variables:
    FOLDERS: src test docs
  script:
    - |
      for FOLDER in $FOLDERS
      do
        echo "The path is root/${FOLDER}"
      done
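If you need indexed access rather than just a loop, the same space-separated convention can be split into a real bash array locally (a sketch; `FOLDERS` mirrors the variable from the docs example above):

```shell
# Split a space-separated CI variable into a real bash array (bash-only)
FOLDERS="src test docs"
read -r -a folders <<< "$FOLDERS"   # read the words into an indexed array

echo "${folders[0]}"      # first entry: src
echo "${#folders[@]}"     # number of entries: 3
```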
After some investigation I found a workaround. Perhaps it may be useful for somebody:
variables:
  # Name of the set in use
  targetType: 'test'
  # Variable set "test"
  X_test: 'TestValue'
  # Variable set "dev"
  X_dev: 'DevValue'
  # Name of the variable from the set in use
  X_curName: 'X_$targetType'
.....
script: |
  echo Variable X_ is ${!X_curName} # prints TestValue
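The `${!X_curName}` trick relies on bash indirect expansion; it can be reproduced outside GitLab like this (a sketch using the same variable names as above):

```shell
# Bash indirect expansion: ${!name} expands the variable whose *name* is stored in $name
targetType='test'
X_test='TestValue'
X_dev='DevValue'
X_curName="X_${targetType}"   # holds the string "X_test"

echo "${!X_curName}"          # prints TestValue
```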
Another approach you could follow is to use a matrix of jobs, which creates one job per array entry.
deploystacks:
  stage: deploy
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1]
      - PROVIDER: gcp
        STACK: [data]
  tags:
    - ${PROVIDER}-${STACK}
Here are the GitLab docs regarding matrix jobs:
https://docs.gitlab.com/ee/ci/jobs/job_control.html#run-a-one-dimensional-matrix-of-parallel-jobs
A variable big_var_01 defined with the value '3q4w#V$X3q4w#V$X' in the following Azure pipeline YAML file gets corrupted to the value '3q4w#V#V' when read back in an Azure pipeline template.
cat parent_scott.yaml
variables:
  - name: big_var_01
    value: ${{ parameters.big_var_01 }}

parameters:
  - name: big_var_01
    displayName: "this var wants to get written to then read by templates"
    type: string
    default: '3q4w#V$X3q4w#V$X'

# CI Triggers
trigger:
  branches:
    exclude:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

# Release Stages
stages:
  - template: child_scott_one.yaml
In the following Azure pipeline template, big_var_01 is read back; however, its value is corrupted and does not match the assignment above.
cat child_scott_one.yaml
# Release Stages
stages:
  - stage: A
    jobs:
      - job: JA
        steps:
          - script: |
              echo "here is big_var_01 -->$(big_var_01)<-- "
              local_var_01=$(big_var_01)
              echo
              echo "here is local_var_01 -->$local_var_01<-- "
              echo
              echo "length of local_var_01 is ${#local_var_01}"
              echo
            name: DetermineResult
See a run of the above pipeline:
https://dev.azure.com/sekhemrekhutawysobekhotep/public_project/_build/results?buildId=525&view=logs&j=54e3124b-25ae-54f7-b0df-b26e1988012b&t=52fad91f-d6ac-51fb-b63d-00fda7898bb6&l=13
see code at https://github.com/sekhemrekhutawysobekhotep/shared_variables_across_templates
How can I make the string variable big_var_01 be treated as a literal? Evidently it is somehow getting evaluated and thereby corrupted. The code above is a simplification of my actual Azure pipeline, where I get the same corruption even when setting up a Key Vault secret with the value 3q4w#V$X3q4w#V$X, which gets corrupted when read back in a pipeline template.
Here is another pipeline run which explicitly shows this issue: https://dev.azure.com/sekhemrekhutawysobekhotep/public_project/_build/results?buildId=530&view=logs&j=ed5db508-d8c1-5154-7d4e-a21cef45e99c&t=a9f49566-82d0-5c0a-2e98-46af3de0d6e9&l=38. On this run I checked ON the pipeline run option "Enable system diagnostics". Next I will try single-quoting my shell assignment of the Azure variable.
At some step, Azure DevOps or Ubuntu replaced part of your string. You have:
3q4w#V$X3q4w#V$X = 3q4w#V + $X3q4w + #V + $X
and the parts $X3q4w and $X were replaced with empty strings, giving you 3q4w#V + #V.
If you run this with \ before each $, i.e. 3q4w#V\$X3q4w#V\$X, you get the correct output:
This is job Foo.
here is big_var_01 -->3q4w#V$X3q4w#V$X<--
here is local_var_01 -->3q4w#V$X3q4w#V$X<--
length of local_var_01 is 16
I got an error running this on windows-latest; however, I got the correct string:
"This is job Foo."
"here is big_var_01 -->3q4w#V$X3q4w#V$X<-- "
'local_var_01' is not recognized as an internal or external command,
operable program or batch file.
ECHO is off.
"here is local_var_01 -->$local_var_01<-- "
ECHO is off.
"length of local_var_01 is ${#local_var_01}"
ECHO is off.
##[error]Cmd.exe exited with code '9009'.
So it looks like Ubuntu expands these as environment variables; since there are no variables named $X3q4w and $X, they are replaced with empty strings.
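That expansion can be reproduced in plain bash, no Azure needed (a sketch): with no variables named X3q4w or X set, double quotes let the expansion happen, while single quotes preserve the literal.

```shell
unset X3q4w X   # make sure neither variable is set

# Double quotes: $X3q4w and $X expand to empty strings
printf '%s\n' "3q4w#V$X3q4w#V$X"    # 3q4w#V#V

# Single quotes: no expansion, the string survives intact
printf '%s\n' '3q4w#V$X3q4w#V$X'    # 3q4w#V$X3q4w#V$X
```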
Found a solution: it works if I put single quotes around the Azure variable during the bash shell assignment.
local_var_01=$(big_var_01) # bad results with value 3q4w#V#V
local_var_02="$(big_var_01)" # bad results with value 3q4w#V#V
local_var_03='$(big_var_01)' # good this gives value 3q4w#V$X3q4w#V$X
My instincts said not to use single quotes, knowing that is not the bash way. However, once I accepted that Azure performs an interstitial layer of preprocessing between the pipeline source code and the fully resolved bash execution, I took a chance and tried it; single quotes are, after all, how the bash shell blocks variable expansion.
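Why this works: Azure's $(big_var_01) substitution is purely textual and happens before bash runs, so the single quotes survive into the rendered script and stop bash from expanding the $X fragments. A rough simulation of that two-phase behavior (the string replacement below is a stand-in for Azure's macro expansion, not its actual implementation):

```shell
# Phase 1 (simulated Azure): textually replace $(big_var_01) in the script source
template="local_var_03='\$(big_var_01)'"
rendered=${template//'$(big_var_01)'/'3q4w#V$X3q4w#V$X'}

# Phase 2 (bash): the single quotes are still in the rendered script, so nothing expands
eval "$rendered"
echo "$local_var_03"   # 3q4w#V$X3q4w#V$X
```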
variables:
  buildSelected: '1.0.0.1234'

steps:
  - powershell: |
      Write-Host "Build Selected $(buildSelected)"
      Write-Host "Escaped '$(buildSelected)'"
    displayName: "Escape variable"
I would like the values 1.0.0.1234 and '$(buildSelected)' to be printed, instead of what is printed now:
Build Selected 1.0.0.1234
Escaped '1.0.0.1234'
Sorry, but I'm afraid Azure DevOps doesn't provide a feature to escape a pipeline variable. If a variable is used in the $(var) format, it will always be replaced with its value when output via Write-Host.
As far as I know, in PowerShell syntax only the backtick ` can be used to escape variables. See:
Write-Host "Build Selected `$`(buildSelected)"
Its output : Build Selected $(buildSelected)
Not sure if that's what you need, but escaping a complete $(var) pipeline variable is not supported; Azure DevOps will always replace anything matching the $(var) format with its value.
I had the same problem, but in bash, and solved it by adding the invisible character named "ZERO WIDTH SPACE" (U+200B) between "$" and "(". This way I can print out "$(Build.SourceVersion)" without it being replaced with the actual value.
I copied the character from https://invisible-characters.com/
---
trigger: none

steps:
  - script: |
      echo "$(Build.SourceVersion): $(Build.SourceVersion)"
    displayName: Test Pipeline Variable Escaping
I'm building a Lambda function on AWS with Terraform. The syntax within the Terraform script for setting an env var is:
resource "aws_lambda_function" "name_of_function" {
  ...
  environment {
    variables = {
      foo = "bar"
    }
  }
}
Now I have a .env file in my repo with a bunch of variables, e.g. email='admin@example.com', and I'd like Terraform to read (some of) them from that file and inject them as env vars to be uploaded and made available to the Lambda function. How can this be done?
This is a pure Terraform-only solution that parses the .env file into a native Terraform map.
output "foo" {
value = { for tuple in regexall("(.*)=(.*)", file("/path/to/.env")) : tuple[0] => tuple[1] }
}
I have defined it as an output for quick tests via the CLI, but you can always define it as a local or use it directly in an argument.
My solution for now involves 3 steps.
Place your variables in a .env file like so:
export TF_VAR_FOO=bar
Yes, it's slightly annoying that you HAVE TO prefix your vars with TF_VAR_, but I've learned to live with it (at least it makes it clear in your .env what vars will be used by terraform, and which will not.)
In your TF script, declare any such variable without the TF_VAR_ prefix, e.g.:
variable "FOO" {
  description = "Optionally say something about this variable"
}
Before you run terraform, source your .env file (. .env) so that the variables are available to processes launched from your current shell environment (viz. terraform here). Adding this step doesn't bother me since I always operate my repo with bash scripts anyway.
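The whole flow, condensed into a runnable sketch (the file contents and variable name are illustrative):

```shell
# Step 1: a .env file with TF_VAR_-prefixed exports
cat > .env <<'EOF'
export TF_VAR_FOO=bar
EOF

# Step 3: source it so child processes (terraform) inherit the variable
. ./.env
echo "$TF_VAR_FOO"   # bar
# terraform plan     # terraform would now see var.FOO = "bar"
```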
Note: I put in a request for a better way to handle .env files here though I'm actually now quite happy with things as is (it was just poorly described in the official documentation how TF relates to .env files IMO).
Based on @Uizz Underweasel's response, I just add the sensitive function for security purposes:
Definition in variables.tf - more secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => sensitive(tuple[1]) }
}
Definition in variables.tf - less secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => tuple[1] }
}
An example of .env
HOST=127.0.0.1
USER=my_user
PASSWORD=pass123
The usage
output "envs" {
  value     = local.envs["HOST"]
  sensitive = true # required if the sensitive() function was used when loading the .env file (the more secure way)
}
Building on top of the great advice from @joe-jacobs, to avoid having to prefix all variables with TF_VAR_, wrap the call to the terraform command with the excellent https://github.com/cloudposse/tfenv utility.
This means you can leave the vars defined as FOO=bar in the .env file, useful to re-use them for other purposes.
Then run a command like:
dotenv run tfenv terraform plan
Terraform will be able to find an env variable TF_VAR_FOO set to bar. 🎉
You could use a terraform.tfvars or .auto.tfvars file. The second is probably more like the .env file, because it's hidden too.
It's not exactly bash's .env format, but something very similar.
For example:
.env
var1=1
var2="two"
.auto.tfvars
var1 = 1
var2 = "two"
Also, you can use JSON format. Personally, I've never done it, but it's possible.
It's described in the official docs:
Terraform also automatically loads a number of variable definitions files if they are present:
Files named exactly terraform.tfvars or terraform.tfvars.json.
Any files with names ending in .auto.tfvars or .auto.tfvars.json.
You still have to declare variables though:
variable "var1" {
  type = number
}

variable "var2" {
  type = string
}
I ended up writing a simple PHP script that reads the env file, parses it how I want (explode(), remove lines with '#', no other vars, etc., then implode() again), and evals that in my makefile first. It looks like this:
provision:
	@eval $$(php ./scripts/terraform-env-vars.php); \
	terraform init ./deployment/terraform; \
	terraform apply -auto-approve ./deployment/terraform/
Unfortunately, terraform has the ridiculous requirement that all of its environment variables be prefixed with TF_VAR_.
I solved this with a combination of grep and sed, with the idea that I would regex replace all environment variables with the required prefix.
Firstly, declare an input variable with the same name as the environment variable in your .tf file:
variable "MY_ENV_VAR" {
  type = string
}
Then, before the terraform command, use the following:
export $(shell sed -E 's/(.*)/TF_VAR_\1/' my.env | grep -v "#" | grep -v "TF_VAR_$"); terraform apply
What this does:
Uses sed to capture each line with (.*), then prefixes the capture group (\1) with TF_VAR_ in the replacement.
Uses grep to remove all lines that have a comment on (anything with a #).
Uses grep to remove all lines that only have TF_VAR.
Unfortunately this also ends up creating a bunch of other environment variables with the TF_VAR_ prefix, and I'm not sure why, but it's at least a start to using .env files with terraform.
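The sed/grep pipeline can be checked in isolation against a sample my.env (no terraform required; the file contents below are illustrative):

```shell
# A sample my.env with a comment line and a blank line
printf '%s\n' 'FOO=bar' '# a comment' '' 'BAZ=qux' > my.env

# Prefix every line, drop comments, and drop blank lines that became bare "TF_VAR_"
sed -E 's/(.*)/TF_VAR_\1/' my.env | grep -v "#" | grep -v "TF_VAR_$"
# TF_VAR_FOO=bar
# TF_VAR_BAZ=qux
```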
I like Python, and I don't like polluting my environment variables, so I used a python3 virtual environment and the python-dotenv[cli] package. I know there's a dotenv in other languages; those may work too.
Here's what I did. It looks something like this on my Mac.
Set up your virtual environment, activate it, and install the package:
python3 -m venv venv
source venv/bin/activate
pip install "python-dotenv[cli]"
Put your environment variables into a .env file in the local directory
foo1 = "bar1"
foo2 = "bar2"
Then, when you run terraform, use the dotenv CLI prefix:
dotenv run terraform plan
When you're done with your virtual environment
deactivate
To start again at a later date, from your TF directory
source venv/bin/activate
This lets me load environment variables when I run other TF commands, without storing them permanently.
I have a yml file used by an Azure pipeline for configuration.
variables:
  CHANGE_URL: $(System.PullRequest.SourceRepositoryURI)/pull/$(System.PullRequest.PullRequestNumber)
The resulting variable CHANGE_URL is: https://github.com/username/project-boilerplate.git/9
The values come from Azure's predefined system variables. I'm trying to remove the '.git' from this string. I tried
CHANGE_URL: sed 's/...$//' <<< $(System.PullRequest.SourceRepositoryURI)
but that did not work. I'm not sure how much control I have within YAML files.
You need a script step that does that:
- bash: |
    # strip the trailing ".git" (note: 's/...$//' only removes three characters, leaving the dot)
    value=$(sed 's/\.git$//' <<< "$(System.PullRequest.SourceRepositoryURI)")
    echo "##vso[task.setvariable variable=CHANGE_URL]$value"
and then in your subsequent steps you'll have a CHANGE_URL variable with the value you need.
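The substitution itself can be tried locally with a stand-in value (the URL below is hypothetical, standing in for $(System.PullRequest.SourceRepositoryURI)):

```shell
# Stand-in for the Azure-provided repository URI
uri='https://github.com/username/project-boilerplate.git'

# Strip the trailing ".git" by matching it explicitly rather than counting characters
value=$(sed 's/\.git$//' <<< "$uri")
echo "$value/pull/9"
# https://github.com/username/project-boilerplate/pull/9
```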