passing function statement as string in terraform - azure

Is it possible to somehow pass a variable in quotes from main.tf and decode it in test.tf?
I am trying to do some calculation, but it is different for each module.
I am wondering if we can pass something like this from one file:
main.tf
test = "length(regexall(\"file\", each.key)) > 0 ? [for n in var.dns : n][0] : [for n in var.dns : n][1]"
test.tf
and read it in the other file without the quotes and escape characters?
test = length(regexall("file", each.key)) > 0 ? [for n in var.dns : n][0] : [for n in var.dns : n][1]
I already tried using replace and trim. It isn't working.
Thank you

Sadly such a thing is not supported in Terraform. You can't dynamically pass TF code as a string and then evaluate the string as code, like eval in other languages.
The closest would be to use a custom data source which you program yourself. Since you have to develop it, you can program any logic you want.
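Alternatively, you can often avoid the need for an eval entirely by passing plain values into the module and keeping the expression itself inside the module. A minimal sketch of that approach (the module name, some_resource, var.items and var.file_pattern are illustrative assumptions, not from the original code):
# main.tf (calling module) - pass data, not code
module "example" {
  source       = "./modules/example"
  dns          = var.dns
  file_pattern = "file"
}
# modules/example/test.tf - the expression stays here and is evaluated per instance
resource "some_resource" "this" {
  for_each = var.items
  test     = length(regexall(var.file_pattern, each.key)) > 0 ? [for n in var.dns : n][0] : [for n in var.dns : n][1]
}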

How can I fix "for_each value depends on resource attributes that cannot be determined until apply"?

Context: I'm aware of the similar questions:
The "for_each" value depends on resource attributes that cannot be determined (Terraform)
for_each value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created
but I think mine is a bit different and it might be fixed by refactoring TF code since there's an additional input restriction.
My original example is very long so I came up with a minimum viable example instead:
I've got an input variable of type map that maps all possible numbers to names:
# terraform.tfvars
all_names_by_number = {
  "1" = "alex",
  "3" = "james",
  "5" = "ann",
  "8" = "paul",
}
# main.tf
locals {
  # active_names_by_number is a map as well,
  # but it's a subset of all_names_by_number, e.g.
  # active_names_by_number = {
  #   "3" = "james",
  #   "5" = "ann",
  # }
  active_names_by_number = people_resource.example.active_names_map
}
# Resource that depends on active_names_by_number
resource "foo" "active_items" {
  for_each = local.active_names_by_number
  name     = "abc-${each.key}"
  location = var.location
  sub_id   = data.zoo.sub[each.key].id
  bar {
    bar_name = each.value
  }
}
When I run the terraform configuration above via terraform plan, I get:
Error: Invalid for_each argument
on main.tf line 286, in resource "foo" "active_items":
286: for_each = local.active_names_by_number
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
which totally makes sense, since people_resource.example.active_names_map is "initialized" at runtime from another resource's response:
locals {
  active_names_by_number = people_resource.example.active_names_map
}
but given the fact that active_names_by_number is a subset of all_names_by_number (input variable), how can I refactor the terraform configuration to show TF that local.active_names_by_number is bounded?
My ideas so far:
Use count instead of for_each as other answers suggest, but I do need to use each.value in my example (and I can't use all_names_by_number, as that would create extra resources).
Get rid of local.active_names_by_number and use var.all_names_by_number instead -- the major downside is TF will create extra resources which is pretty expensive.
Somehow write a nested for loop:
# pseudocode
for name in var.all_names_by_number:
    if name is in people_resource.example.active_names_map:
        # create an instance of foo.active_item
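In real HCL, that nested loop would be a for expression used as the for_each value, roughly like the sketch below; note that it still references people_resource.example.active_names_map, so on its own it hits the same plan-time error:
for_each = {
  for number, name in var.all_names_by_number :
  number => name
  if contains(keys(people_resource.example.active_names_map), number)
}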
I also faced the same issue, so I followed the method of splitting it into two modules which @Alex Kuzminov suggested. Thanks for that.
So instead of using for_each, use count, and along with that use the try() function.
locals {
  active_names_by_number = try(people_resource.example.active_names_map, tolist(["a", "b", "c", "d"]))
}
So the initial error is resolved, and during terraform apply, once the actual resource is running, the placeholder values a, b, c, d are replaced with the actual content.
I hope this helps. Thanks.

Terraform Inappropriate value for attribute "subnets": element 0: string required

I am upgrading from tf 11 to tf 12. I've run into the issue where terraform plan produces the following error:
Error: Incorrect attribute value type
4: subnets = ["${var.alb_subnets}"]
Inappropriate value for attribute "subnets": element 0: string required.
The code snippet for this error is:
resource aws_alb "alb" {
  name            = "ecs-${var.app_name}"
  internal        = "${var.internal}"
  subnets         = ["${var.alb_subnets}"]
  security_groups = ["${var.security_groups}"]
  count           = "${var.do_module}"
}
If anyone can help me with this I would appreciate it.
Change subnets = ["${var.alb_subnets}"] to subnets = var.alb_subnets
It's an update in Terraform v0.12.
Reference: https://www.terraform.io/upgrade-guides/0-12.html#referring-to-list-variables
The error message indicates that the subnets argument of the aws_alb resource expects a list whose elements are of type string, and that the value you provided is not of type list(string). Although the value and type of the variable alb_subnets are not given in the question, it can be assumed to be either a list or a map, given that the variable name is plural. Assuming it is a list, you are casting it to a list(list(any)) when you specify it in your config as:
["${var.alb_subnets}"]
Deconstructing this, the [] specifies a list, and the variable is already a list. The elements of the variable are not provided in the question, but they can be assumed to be any without sacrificing accuracy.
Instead of specifying a nested list by wrapping the variable inside another set of brackets, you can remove the outer brackets:
resource aws_alb "alb" {
  name            = "ecs-${var.app_name}"
  internal        = "${var.internal}"
  subnets         = var.alb_subnets
  security_groups = ["${var.security_groups}"]
  count           = "${var.do_module}"
}
Now the argument value will be a list(any). If the elements of alb_subnets are not strings, then you will also have to fix that to ensure the proper type of list(string) for the argument.
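If the variable is not already constrained, declaring it with an explicit type makes the expectation clear and lets Terraform report the mismatch at the variable itself; a minimal sketch (the actual variable declaration is not shown in the question, so this is an assumption):
variable "alb_subnets" {
  type        = list(string)
  description = "Subnet IDs for the ALB"
}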

converting a python string to a variable

I have read almost every similar question but none of them seems to solve my current problem.
In my Python code, I am importing a string from my bashrc, and in the following I define a variable with the same name, which I then use as an index. Here is a simple example:
obs = os.environ['obs']
# >> obs == 'id_0123'
id_0123 = numpy.where(position1 == 456)
# >> position1[id_0123] == 456
# >> position2[id_0123] == 789
But of course, when I do positions[obs], it throws an error, since obs is a string rather than an index (numpy.int64). So I have tried to look for a solution to convert my string into a variable, but all the solutions suggest either converting to a dictionary or otherwise assigning the string to an integer. I can not do that, since my string is dynamic and will constantly change. In the end I am going to have about 50 variables, and I need to check which variable the current obs corresponds to, so I can use it as an index to access the parameters.
Edit:
position1 and position2 are just numpy arrays, so depending on the output of os.environ (which is 'id_0123' in this particular case), they will print an array element. So I can not assign 'id_0123' another string or number, since I am using that exact name as a variable.
The logic is that there are many different arrays, and I want to use the output of os.environ as an input to access the elements of these arrays.
If you wanted to use a dictionary instead, this would work.
obs = 'id_0123'
my_dict = {obs: 'Tester'}
print(my_dict[obs])
print(my_dict['id_0123'])
You could use the fact that (almost) everything is a dictionary in Python to create a storage container that you index with obs:
container = lambda: None
container.id_0123 = ...
...
print(positions[container.__dict__[obs]])
Alternatively, you can use globals() or locals() to achieve the desired behavior:
import numpy
import os

def environment_to_variable(environment_key='obs', variable_values=globals()):
    # Get the name of the variable we want to reference from the bash environment
    variable_name = os.environ[environment_key]
    # Grab the variable value that matches the named variable.
    # The variable you want to reference must be visible in the global namespace by default.
    variable_value = variable_values[variable_name]
    return variable_value

positions = [2, 3, 5, 7, 11, 13]
id_0123 = 2  # could use numpy.where here
id_0456 = 4

index = environment_to_variable(variable_values=locals())
print(positions[index])
If you place this in a file called a.py, you can observe the following:
$ obs="id_0123" python ./a.py
5
$ obs="id_0456" python ./a.py
11
This allows you to index the array differently at runtime, which is what it seems like your intention is.

How do I get outputs for multiple modules?

I have a module I call multiple times in a single tf file. One of the things it does is create an S3 bucket. I have this defined in its output:
output "mybucket" {
value = "${aws_s3_bucket.mybucket.id}"
}
In order to view this output though, because I'm using modules, I have to scope it to the specific module, which means doing this:
terraform output -module=module1 mybucket
Which means, if I just want a list of ALL the buckets created via the tf file, I have to loop over them programmatically. Sadly wildcards do not work:
terraform output -module=* mybucket
So now, how do I do this? I could loop over all the modules and call output multiple times but I can't find a terraform command that lists all the modules currently in use.
With state list I get the module names, but in a format I have to parse:
terraform state list aws_s3_bucket.mybucket
module.module1.aws_s3_bucket.mybucket
module.module2.aws_s3_bucket.mybucket
Is there a way to query state and retrieve all the buckets that were created OR a way to view outputs of all modules?
Edit: So it seems I can output a list from the tf file that calls the modules. The tf file collects the output of multiple module calls into a list:
module "mod1" {..do stuff, create s3 bucket, output{mybucket} ...}
module "mod2" {..do stuff, create s3 bucket, output{mybucket} ...}
module "mod3" {..do stuff, create s3 bucket, output{mybucket} ...}
output "my-buckets" {
value = ["${module.mod1.mybucket}","${module.mod2.mybucket}"]
}
The disappointing thing though is that I have to manually enter each module by name. The splat operator didn't work here to expand the modules - am I using it wrong?
output "my-buckets" {
value = ["${module.*.mybucket}"]
}
The error I get is:
* output 'my-buckets': unknown module referenced: *
output 'my-buckets': undefined module referenced *
Also, following up on @ydaetskcoR's answer, the splat notation for your case would look somewhat like:
output "output-s3-buckets" {
value = "${aws_s3_bucket.mybuckets.*.id}"
description = "List of buckets created"
}
which can be read at the module declaration level as:
list_of_buckets = ["${module.s3_module.output-s3-buckets}"]
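For context, the splat in that output assumes the bucket resource inside the module is created with count, roughly like the sketch below (var.bucket_count and var.name_prefix are illustrative, not from the original question):
resource "aws_s3_bucket" "mybuckets" {
  count  = "${var.bucket_count}"
  bucket = "${var.name_prefix}-${count.index}"
}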

vpc_zone_identifier should be a list

I'm not getting my head around this. When doing a terraform plan it complains the value should be a list. Fair enough. Let's break this down in steps.
The error
1 error(s) occurred:
* module.instance-layer.aws_autoscaling_group.mariadb-asg: vpc_zone_identifier: should be a list
The setup
The VPC and subnets are created with terraform in another module.
The outputs of that module give the following:
"subnets_private": {
"sensitive": false,
"type": "string",
"value": "subnet-1234aec7,subnet-1234c8a7"
},
In my main.tf I use the output of said module to feed it into a variable for my module that takes care of the auto scaling groups:
subnets_private = "${module.static-layer.subnets_private}"
This is used in the module to require the variable:
variable "subnets_private" {}
And this is the part where I configure the vpc_zone_identifier:
Attempt: split
resource "aws_autoscaling_group" "mariadb-asg" {
vpc_zone_identifier = "${split(",",var.subnets_private)}"
Attempt: list
resource "aws_autoscaling_group" "mariadb-asg" {
vpc_zone_identifier = "${list(split(",",var.subnets_private))}"
Question
The above attempt with the list(split( should in theory work. Since terraform complains but doesn't print the actual value it's quite hard to debug. Any suggestions are appreciated.
Filling in the value manually works.
Reading the documentation very carefully, it appears that split is not spitting out clean elements that can afterwards be put into a list.
They suggest wrapping brackets around the string (["xxxxxxx"]) so terraform picks it up as a list.
If my logic is correct, that means subnet-1234aec7,subnet-1234c8a7 is output as subnet-1234aec7","subnet-1234c8a7 (note the quotes), assuming the quotes around the delimiter of the split command have nothing to do with this.
Here is the working solution:
vpc_zone_identifier = ["${split(",",var.subnets_private)}"]
The following also helps:
vpc_zone_identifier = ["${data.aws_subnet_ids.all.ids}"]
