Terraform: Property file to env section

I have a .properties file and I'd like to apply a template to generate env {} entries for my deployment.
Something like this:
{{- range $key, $value := .Values.configurationOverrides }}
env {
  name  = {{ printf "REST_%s" $key | replace "." "_" | upper | quote }}
  value =
This should generate env {} entries like:
...
spec {
  container {
    image = "**"
    name  = "rest"
    env {
      name  = "REST_FROM_PROPERTIES_FILE"
      value = "VALUE FROM PROPERTIES FILE"
    }
...
Is it possible?
Thanks.

Note: I assume you use Terraform 0.12.x
One part of the solution is dynamic blocks, which allow you to loop over a list of values and dynamically add env sections:
properties_list = [
  {
    name  = "MY_ENV_VAR"
    value = "VALUE"
  },
]

dynamic "env" {
  for_each = var.properties_list
  content {
    name  = env.value.name
    value = env.value.value
  }
}
The hard part is parsing the properties file, as there is no out-of-the-box way to do it in Terraform. You can use something like jsondecode to decode JSON into a Terraform object. So you could first transform your properties file into a JSON file (for example, using an npm package) and then decode that into a Terraform object.
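Alternatively, a simple key=value properties file can be parsed with Terraform's built-in string functions, avoiding the JSON conversion step entirely. A rough sketch, assuming a hypothetical app.properties file with one KEY=VALUE pair per line, # comments, and no multi-line values:

```hcl
locals {
  # Hypothetical file name; trim each line so trailing whitespace doesn't matter.
  prop_lines = [
    for line in split("\n", file("${path.module}/app.properties")) : trimspace(line)
  ]

  # Skip blanks and comments; split on the first "=" and rejoin the rest as the value.
  properties = {
    for line in local.prop_lines :
    trimspace(split("=", line)[0]) => trimspace(join("=", slice(split("=", line), 1, length(split("=", line)))))
    if line != "" && substr(line, 0, 1) != "#" && length(split("=", line)) > 1
  }
}
```

local.properties is then a plain map that can feed the for_each of a dynamic "env" block directly.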

Following the help of Blokje5 I managed to make it work with variables. It's not really loading from a properties file, but it fit my needs (which were to avoid changing Terraform code whenever config/env vars change):
File terraform.tfvars:
kafka-rest-envs = {
  KAFKA_REST_BOOTSTRAP_SERVERS                             = "localhost:9092"
  KAFKA_REST_HOST_NAME                                     = "hostnamey"
  KAFKA_REST_ID                                            = "kafka-rest"
  KAFKA_REST_LISTENERS                                     = "http://0.0.0.0:8082"
  KAFKA_REST_CLIENT_SASL_JAAS_CONFIG                       = "***"
  KAFKA_REST_CLIENT_SECURITY_PROTOCOL                      = "SASL_SSL"
  KAFKA_REST_CLIENT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM  = "https"
  KAFKA_REST_CONSUMER_RETRY_BACKOFF_MS                     = "500"
  KAFKA_REST_CONSUMER_REQUEST_TIMEOUT_MS                   = "25000"
  KAFKA_REST_PRODUCER_ACKS                                 = "1"
  KAFKA_REST_CLIENT_SASL_MECHANISM                         = "PLAIN"
  KAFKA_REST_ADMIN_REQUEST_TIMEOUT_MS                      = "50000"
  KAFKA_REST_KEY_SERIALIZER                                = "io.confluent.kafka.serializers.KafkaAvroSerializer"
  KAFKA_REST_VALUE_SERIALIZER                              = "io.confluent.kafka.serializers.KafkaAvroSerializer"
}
And the Terraform:
...
spec {
  container {
    image = "confluentinc/cp-kafka-rest"
    name  = "kafka-rest"
    dynamic "env" {
      for_each = var.kafka-rest-envs
      content {
        name  = env.key
        value = env.value
      }
    }
...
Thank you!
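For readers who do want to load these values from a file instead of tfvars, one hedged option (assuming a hypothetical kafka-rest-envs.json file containing a flat object of name/value strings) is jsondecode:

```hcl
locals {
  # Hypothetical JSON file, e.g. {"KAFKA_REST_HOST_NAME": "hostnamey", ...}
  kafka_rest_envs = jsondecode(file("${path.module}/kafka-rest-envs.json"))
}
```

The dynamic "env" block would then use for_each = local.kafka_rest_envs instead of the variable, and env vars can be changed without touching tfvars or Terraform code.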

Related

How to merge a default list of Object with values from tfvars (Terraform)

I'm trying to achieve something with Terraform and I'm having trouble finding a solution.
I define a variable like this:
variable "node_pools" {
  type = list(object({
    name               = string
    location           = string
    cluster            = string
    initial_node_count = number
    tag                = string
  }))
  default = [
    {
      name               = "default-pool"
      cluster            = "cluster_name"
      location           = "usa"
      initial_node_count = 3
      tag                = "default"
    },
  ]
}
Then, in my tfvars file, I define my node_pools like this:
node_pools = [
  {
    name = "pool-01"
    tag  = "first-pool"
  },
  {
    name = "pool-highmemory"
    tag  = "high-memory"
  }
]
Then, in my main.tf, I try to use my node_pools variable, but I need the entries filled with the default values that are not specified in the tfvars file, like location, cluster, etc.
I think I need to merge them, but I can't find any way to achieve that.
Thanks for any help.
The default value for a whole variable is only for situations where the variable isn't assigned any value at all.
If you want to specify default values for attributes within a nested object then you can use optional object type attributes and specify the default values inline inside the type constraint, like this:
variable "node_pools" {
  type = list(object({
    name               = string
    location           = optional(string, "usa")
    cluster            = optional(string, "cluster_name")
    initial_node_count = optional(number, 3)
    tag                = optional(string, "default")
  }))
  default = [
    {
      name = "default-pool"
    },
  ]
}
Because these default values are defined directly for the nested attributes they apply separately to each element of the list of objects.
If the caller doesn't set a value for this variable at all then the single object in the default value will be expanded using the default values to produce this effective default value:
tolist([
  {
    name               = "default-pool"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "default"
  },
])
With the tfvars file you showed the effective value would instead be the following:
tolist([
  {
    name               = "pool-01"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "first-pool"
  },
  {
    name               = "pool-highmemory"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "high-memory"
  },
])
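For what it's worth, on Terraform versions that predate optional() object attributes (it became generally available in 1.3), a similar effect can be approximated with a loosely typed variable and merge(). A sketch under that assumption:

```hcl
variable "node_pools" {
  # "any" so callers may omit attributes; optional() is not available here.
  type    = any
  default = [{ name = "default-pool" }]
}

locals {
  node_pool_defaults = {
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "default"
  }

  # Fill in any attribute the caller omitted; caller-supplied keys win.
  node_pools = [for p in var.node_pools : merge(local.node_pool_defaults, p)]
}
```

The trade-off is losing the type checking that the list(object(...)) constraint provides.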

Is there a possibility to dynamically pass user-defined variables (key = value) to a Terraform module?

There is a resource:
resource "resource_name" "foo" {
  name = "test"
  config {
    version = 14
    resources {
      disk_type_id = "network-ssd"
    }
    postgresql_config = {
      enable_parallel_hash = true
    }
  }
}
I need a module which accepts optional user variables in "postgresql_config". There can be many such variables.
I tried the following:
variables.tf
variable "postgresql_config" {
  description = "User defined for postgresql_config"
  type = list(object({
    # key1 = value1
    # ...
    # key50 = value50
  }))
}

variable "config" {
  description = "for dynamic block 'config'"
  type = list(object({
    version = number
  }))
  default = [{
    version = 14
  }]
}

variable "resources" {
  description = "for dynamic block 'resources'"
  type = list(object({
    disk_type_id = string
  }))
  default = [{
    disk_type_id = "network-hdd"
  }]
}
module/postgresql/main.tf
resource "resource_name" "foo" {
  name = "test"
  dynamic "config" {
    for_each = var.config
    content {
      version = config.value["version"]
      dynamic "resources" {
        for_each = var.resources
        content {
          disk_type_id = resources.value["disk_type_id"]
        }
      }
      # problem is here
      postgresql_config = {
        for_each = var.postgresql_config
        each.key = each.value
      }
    }
  }
}
example/main.tf
module "postgresql" {
  source = "../module/postgresql"
  postgresql_config = [{
    auto_explain_log_buffers          = true
    log_error_verbosity               = "LOG_ERROR_VERBOSITY_UNSPECIFIED"
    max_connections                   = 395
    vacuum_cleanup_index_scale_factor = 0.2
  }]
}
That is, I understand that I need to use "dynamic", but it can only be applied to the "config" block and the nested "resources" block.
How can I pass values for "postgresql_config" from main.tf to the module? Of course, my example with for_each = var.postgresql_config doesn't work, but I hope it gives an idea of what I need.
Or does Terraform have no such option to use custom variables dynamically at all, so that all of them must be specified explicitly?
Any help would be appreciated, thank you
From what I understand, you are trying to create a map dynamically for your resource's postgresql_config.
I would recommend using a for expression to solve that problem.
However, I think your problem lies in how you have defined the variables for your module. You might run into a problem if your postgresql_config list has multiple configs in it, because that argument can only take a single map by the looks of it.
have a look at the following documentation:
this one is for how to define your variables
https://www.terraform.io/language/expressions/dynamic-blocks#multi-level-nested-block-structures
for expressions
https://www.terraform.io/language/expressions/for
My solution for your config problem would be something like this, assuming that the postgresql_config list always has exactly one element:
# problem is here
postgresql_config = var.postgresql_config[0]
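If the list may contain more than one map, a hedged alternative (assuming Terraform 0.15+ for the ... expansion syntax in function calls) is to collapse it with merge:

```hcl
# Collapse a list of maps into a single map; later entries win on duplicate keys.
postgresql_config = merge(var.postgresql_config...)
```

With that, a caller could split its settings across several maps in the list and they would all land in the one postgresql_config argument.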

Iterate over list of maps

I'm trying to iterate over a simple list of maps. Here's a segment of what my module code looks like:
resource "helm_release" "nginx-external" {
  count      = var.install_ingress_nginx_chart ? 1 : 0
  name       = "nginx-external"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = var.nginx_external_version
  namespace  = "default"
  lint       = true
  values = [
    "${file("chart_values/nginx-external.yaml")}"
  ]
  dynamic "set" {
    for_each = { for o in var.nginx_external_overrides : o.name => o }
    content {
      name  = each.value.name
      value = each.value.value
    }
  }
}
variable "nginx_external_overrides" {
  description = "A map of maps to override customizations from the default chart/values file."
  type        = any
}
And here's a snippet of how I'm trying to call it from terragrunt:
nginx_external_overrides = [
  { name = "controller.metrics.enabled", value = "false" }
]
When trying to use this in a dynamic block, I'm getting:
Error: each.value cannot be used in this context
A reference to "each.value" has been used in a context in which it is
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.
Ideally, I would be able to pass any number of maps in nginx_external_overrides to override the settings in the yaml being passed, but am struggling to do so. Thanks for the help.
If you are using for_each in a dynamic block, you can't use each. The iterator is named after the block label, so in your case it should be set:
dynamic "set" {
  for_each = { for o in var.nginx_external_overrides : o.name => o }
  content {
    name  = set.value.name
    value = set.value.value
  }
}

Adding extraEnv to helm chart via terraform and terragrunt

I need to set additional variables in the values.yaml of my Helm chart (link to jaeger https://github.com/jaegertracing/helm-charts/blob/main/charts/jaeger/values.yaml#L495) via terraform + terragrunt. In values.yaml, the code looks like this:
spark:
  extraEnv: []
It is necessary that it be like this:
spark:
  extraEnv:
    - name: JAVA_OPTS
      value: "-Xms4g -Xmx4g"
Terraform uses this dynamic block:
dynamic "set" {
  for_each = var.extraEnv
  content {
    name  = "spark.extraEnv [${set.key}]"
    value = set.value
  }
}
The variable is defined like this:
variable "extraEnv" {
  type = map
}
From terragrunt I pass the value of the variable:
extraEnv = {
  "JAVA_OPTS" = "-Xms4g -Xmx4g"
}
And I get this error:
Error: failed parsing key "spark.extraEnv [JAVA_OPTS]" with value -Xms4g -Xmx4g, error parsing index: strconv.Atoi: parsing "JAVA_OPTS": invalid syntax
on main.tf line 16, in resource "helm_release" "jaeger":
16: resource "helm_release" "jaeger" {
How do I use the dynamic block correctly in this case? I suppose I need to use a list of maps, but I do not understand how to use that in a dynamic block.
Update:
I solved my problem in a different way.
In values, I defined the spark.extraEnv list using yamlencode.
values = [
  "${file("${path.module}/values.yaml")}",
  yamlencode({
    spark = {
      extraEnv = var.spark_extraEnv
    }
  })
]
In variables.tf:
variable "spark_extraEnv" {
  type = list(object({
    name  = string
    value = string
  }))
}
And in terragrunt passed the following variable value:
spark_extraEnv = [
  {
    name  = "JAVA_OPTS"
    value = "-Xms4g -Xmx4g"
  }
]
I landed here while I was looking for a way to set extraEnv for a different chart. Finally figured out an answer for the above question as well:
set {
  name  = "extraEnv[0].name"
  value = "JAVA_OPTS"
}

set {
  name  = "extraEnv[0].value"
  value = "-Xms4g -Xmx4g"
}
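That indexed form can also be generated with dynamic blocks, so the number of entries isn't hard-coded. A sketch assuming the spark_extraEnv list(object) variable from the update above (the integer index is what the Helm provider's set path parser expects, which is why the original "spark.extraEnv [JAVA_OPTS]" key failed):

```hcl
# Two parallel dynamic blocks: one emits each env var's name, the other its value.
dynamic "set" {
  for_each = { for i, e in var.spark_extraEnv : i => e }
  content {
    name  = "spark.extraEnv[${set.key}].name"
    value = set.value.name
  }
}

dynamic "set" {
  for_each = { for i, e in var.spark_extraEnv : i => e }
  content {
    name  = "spark.extraEnv[${set.key}].value"
    value = set.value.value
  }
}
```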

terraform nested interpolation with count

Using terraform I wish to refer to the content of a list of files (ultimately I want to zip them up using the archive_file provider, but in the context of this post that isn't important). These files all live within the same directory so I have two variables:
variable "source_root_dir" {
  type        = "string"
  description = "Directory containing all the files"
}

variable "source_files" {
  type        = "list"
  description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}
I want to use the template data provider to refer to the content of the files. Given that the number of files in source_files can vary, I need to use count to carry out the same operation on all of them.
Thanks to the information provided at https://stackoverflow.com/a/43195932/201657 I know that I can refer to the content of a single file like so:
provider "template" {
  version = "1.0.0"
}

variable "source_root_dir" {
  type = "string"
}

variable "source_file" {
  type = "string"
}

data "template_file" "t_file" {
  template = "${file("${var.source_root_dir}/${var.source_file}")}"
}

output "myoutput" {
  value = "${data.template_file.t_file.rendered}"
}
Notice that this contains nested string interpolations. If I run:
terraform init && terraform apply -var source_file="foo" -var source_root_dir="./mydir"
after first creating the file mydir/foo, then this is the output:
Success!
Now I want to combine that nested string interpolation syntax with my count. Hence my terraform project now looks like this:
provider "template" {
  version = "1.0.0"
}

variable "source_root_dir" {
  type        = "string"
  description = "Directory containing all the files"
}

variable "source_files" {
  type        = "list"
  description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}

data "template_file" "t_file" {
  count    = "${length(var.source_files)}"
  template = "${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}"
}
output "myoutput" {
  value = "${data.template_file.t_file.*.rendered}"
}
Yes, it looks complicated, but syntactically it's correct (at least I think it is). However, if I run init and apply:
terraform init && terraform apply -var source_files='["foo", "bar"]' -var source_root_dir='mydir'
I get errors:
Error: data.template_file.t_file: 2 error(s) occurred:
* data.template_file.t_file[0]: __builtin_StringToInt: strconv.ParseInt: parsing "mydir": invalid syntax in:
${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}
* data.template_file.t_file[1]: __builtin_StringToInt: strconv.ParseInt: parsing "mydir": invalid syntax in:
${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}
My best guess is that it's interpreting the / as a division operation, hence it's attempting to parse the value mydir in source_root_dir as an int.
I've played around with this for ages now and can't figure it out. Can someone figure out how to use nested string interpolations together with a count in order to refer to the content of multiple files using the template provider?
OK, I think I figured it out: formatlist to the rescue.
provider "template" {
  version = "1.0.0"
}

variable "source_root_dir" {
  type        = "string"
  description = "Directory containing all the files"
}

variable "source_files" {
  type        = "list"
  description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}

locals {
  fully_qualified_source_files = "${formatlist("%s/%s", var.source_root_dir, var.source_files)}"
}

data "template_file" "t_file" {
  count    = "${length(var.source_files)}"
  template = "${file(element("${local.fully_qualified_source_files}", count.index))}"
}

output "myoutput" {
  value = "${data.template_file.t_file.*.rendered}"
}
when applied:
terraform init && terraform apply -var source_files='["foo", "bar"]' -var source_root_dir='mydir'
outputs:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
myoutput = [
This is the content of foo
,
This is the content of bar
]
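As a footnote, on Terraform 0.12 and later the original quoting problem disappears, because interpolations no longer need to be wrapped in extra quotes and lists support direct indexing. A sketch of the same data source without formatlist:

```hcl
data "template_file" "t_file" {
  count    = length(var.source_files)
  template = file("${var.source_root_dir}/${var.source_files[count.index]}")
}
```

The 0.11-era syntax in the answer above still works, but the simpler form avoids the nested-interpolation pitfall entirely.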