I am trying to manage my GitHub organisation using Terraform and want to implement a team structure.
I have defined the team structure in a map as below:
variable "teams" {
description = "Map of teams with members"
type = "map"
default = {
"TeamA" = ["abc", "xyz", "pqr", "mno"]
"TeamB" = ["abc", "xyz", "mno"]
"TeamC" = ["pqr"]
}
}
I am able to create these teams using following resource code:
resource "github_team" "sub-teams" {
count = "${length(keys(var.teams))}"
name = "${element(keys(var.teams), count.index)} Team"
description = "${element(keys(var.teams), count.index)} team"
privacy = "closed"
}
Now I need to loop over the keys of the map and add the corresponding team members to their respective teams. How can I achieve this?
I referred to this one, but it looks like it uses constant lists rather than the scenario described here.
Nested maps are not yet supported by Terraform. You will need to use plain values inside the map rather than lists. The link below takes you to the relevant GitHub issue:
https://github.com/hashicorp/terraform/issues/2114
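On current Terraform versions (0.12 and later) this kind of loop no longer requires nested maps. As a rough sketch only (assuming the GitHub provider's github_team_membership resource; the resource names and the "member" role are illustrative), you could flatten the map into team/member pairs and drive both resources with for_each:

locals {
  # Flatten the teams map into a list of { team, member } pairs.
  team_memberships = flatten([
    for team, members in var.teams : [
      for member in members : {
        team   = team
        member = member
      }
    ]
  ])
}

resource "github_team" "sub_teams" {
  for_each    = var.teams
  name        = "${each.key} Team"
  description = "${each.key} team"
  privacy     = "closed"
}

resource "github_team_membership" "members" {
  # Key each membership by "Team:member" so instances get stable addresses.
  for_each = {
    for tm in local.team_memberships : "${tm.team}:${tm.member}" => tm
  }

  team_id  = github_team.sub_teams[each.value.team].id
  username = each.value.member
  role     = "member"
}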
I have multiple aws_glue_catalog_table resources and I want to create a single output that loops over all resources to show the S3 bucket location of each one. The purpose of this is to test in Terratest whether I am using the correct location for each resource (because it is a concatenation of variables). I cannot use aws_glue_catalog_table.* or aws_glue_catalog_table.[] because Terraform does not allow referencing a resource without specifying its name.
So I created a variable "table_names" with r1, r2, rx. Then, I can loop over the names. I want to create the string aws_glue_catalog_table.r1.storage_descriptor[0].location dynamically, so I can check if the location is correct.
resource "aws_glue_catalog_table" "r1" {
name = "r1"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/r1"
}
...
}
resource "aws_glue_catalog_table" "rX" {
name = "rX"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/rX"
}
}
variable "table_names" {
description = "The list of Athena table names"
type = list(string)
default = ["r1", "r2", "r3", "rx"]
}
output "athena_tables" {
description = "Athena tables"
value = [for n in var.table_names : n]
}
First attempt: I tried to create an output "athena_tables_location" with the syntax aws_glue_catalog_table.${table}, but that does not work.
output "athena_tables_location" {
// HOW DO I ITERATE OVER ALL TABLES?
value = [for t in var.table_names : aws_glue_catalog_table.${t}.storage_descriptor[0].location"]
}
Second attempt: I tried to create a variable "table_name_locations", but IntelliJ already flags ${t} in the for expression [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"] as an error.
variable "table_name_locations" {
description = "The list of Athena table locations"
type = list(string)
// THIS ALSO DOES NOT WORK
default = [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"]
}
How can I list all table locations in the output and then test it with Terratest?
Once I can iterate over the tables and collect the S3 location I can do the following test using Terratest:
athenaTablesLocation := terraform.Output(t, terraformOpts, "athena_tables_location")
assert.Contains(t, athenaTablesLocation, "s3://rX/test-config/rX")
It seems like you have an unusual mix of static and dynamic here: you've statically defined a fixed number of aws_glue_catalog_table resources but you want to use them dynamically based on the value of an input variable.
Terraform doesn't allow dynamic references to resources because its execution model requires building a dependency graph between all of the objects, and so it needs to know which exact resources are involved in a particular expression. However, you can in principle build your own single value that includes all of these objects and then dynamically choose from it:
locals {
  tables = {
    r1 = aws_glue_catalog_table.r1
    r2 = aws_glue_catalog_table.r2
    r3 = aws_glue_catalog_table.r3
    # etc
  }
}

output "table_locations" {
  value = {
    for t in var.table_names : t => local.tables[t].storage_descriptor[0].location
  }
}
With this structure Terraform can see that output "table_locations" depends on local.tables and local.tables depends on all of the relevant resources, and so the evaluation order will be correct.
However, it also seems like your table definitions are systematic based on var.table_names and so could potentially benefit from being dynamic themselves. You could achieve that using the resource for_each feature to declare multiple instances of a single resource:
variable "table_names" {
description = "Athena table names to create"
type = set(string)
default = ["r1", "r2", "r3", "rx"]
}
resource "aws_glue_catalog_table" "all" {
for_each = var.table_names
name = each.key
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/${each.key}"
}
...
}
output "table_locations" {
value = {
for k, t in aws_glue_catalog_table.all : k => t.storage_descriptor[0].location
}
}
In this case aws_glue_catalog_table.all represents all of the tables together as a single resource with multiple instances, each one identified by the table name. for_each resources appear in expressions as maps, so this will declare resource instances with addresses like this:
aws_glue_catalog_table.all["r1"]
aws_glue_catalog_table.all["r2"]
aws_glue_catalog_table.all["r3"]
...
Because this is already a map, this time we don't need the extra step of constructing the map in a local value, and can instead just access this map directly to build the output value, which will be a map from table name to storage location:
{
  r1 = "s3://BUCKETNAME/ENVNAME-config/r1"
  r2 = "s3://BUCKETNAME/ENVNAME-config/r2"
  r3 = "s3://BUCKETNAME/ENVNAME-config/r3"
  # ...
}
In this example I've assumed that all of the tables are identical aside from their names, which I expect isn't true in practice; I was going only by what you included in the question. If the tables do need to have different settings then you can change var.table_names into a variable "tables" whose type is a map of object type, where the values describe the differences between the tables. That's a different topic somewhat beyond the scope of this question, so I won't go into full detail here.
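As a very rough illustration of the shape such a variable could take (the location_suffix attribute is just a placeholder for whatever actually differs between your tables):

variable "tables" {
  description = "Athena tables to create, keyed by table name"
  type = map(object({
    location_suffix = string
  }))
  default = {
    r1 = { location_suffix = "r1" }
    r2 = { location_suffix = "r2" }
  }
}

resource "aws_glue_catalog_table" "all" {
  for_each = var.tables

  name          = each.key
  database_name = var.db_name

  storage_descriptor {
    # Use the per-table attribute instead of assuming every table is identical.
    location = "s3://${var.bucket_name}/${var.environment}-config/${each.value.location_suffix}"
  }
}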
I am trying to create a map of secondary ranges for the GCP VPC module here and have the following defined in my locals:
secondary_ranges = {
  for name, config in var.subnet_config : config.subnet_name => [
    {
      range_name    = local.ip_range_pods
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.0.0/17"
    },
    {
      range_name    = local.ip_range_services
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.128.0/17"
    }
  ]
}
subnet_config is defined as follows:
subnet_config = {
  cluster1 = {
    region      = "us-east1"
    subnet_name = "default"
  },
  cluster2 = {
    region      = "us-west1"
    subnet_name = "default"
  }
}
This creates the secondary subnets just fine if the subnet names are unique but fails with the error below if the subnet names (which end up being the key values) are not unique:
Two different items produced the key "default" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
I'm trying to figure out if I can use grouping mode if the value is a list and if so, how?
Any help would be greatly appreciated.
If you use the grouping mode in this case then it would be to group the outermost for expression, which is producing a map, because that's the one whose keys you'd be grouping by.
We can start by adding the grouping mode modifier to that and see what happens:
secondary_ranges_pairs = {
  for name, config in var.subnet_config : config.subnet_name => [
    {
      range_name    = local.ip_range_pods
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.0.0/17"
    },
    {
      range_name    = local.ip_range_services
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.128.0/17"
    }
  ]...
}
The effect of the expression above would be to create a map of lists of lists of objects, where the deepest lists are each pairs of objects because of how your inner for expression is written.
To turn that into the map of lists of objects which I think you're hoping for, you can then use flatten in a separate step:
secondary_ranges = {
  for k, pairs in local.secondary_ranges_pairs : k => flatten(pairs)
}
flatten recursively walks a data structure where there are lists of lists and concatenates all of the nested lists together into a single flat list.
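Putting the two steps together, the whole thing would live in a single locals block, roughly like this (this just places the two fragments above into one block):

locals {
  # Step 1: group by subnet name; grouping mode (...) collects the pair-lists
  # of all subnet_config entries that share the same subnet_name.
  secondary_ranges_pairs = {
    for name, config in var.subnet_config : config.subnet_name => [
      {
        range_name    = local.ip_range_pods
        ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.0.0/17"
      },
      {
        range_name    = local.ip_range_services
        ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.128.0/17"
      }
    ]...
  }

  # Step 2: flatten each list of pair-lists into a single flat list of objects.
  secondary_ranges = {
    for k, pairs in local.secondary_ranges_pairs : k => flatten(pairs)
  }
}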
A word of caution: you seem to be using a lexical sort of the subnet_config keys to derive network numbering. That means that if you add new elements to var.subnet_config whose keys sort earlier than any existing ones (for example, adding a cluster0 to what you showed in your question) then you'll implicitly renumber all of the subsequent networks, which is likely to cause a lot of churn recreating objects, and the change might not even be possible if those networks contain other objects.
I'd typically recommend instead being explicit about which number you've assigned to each network, by including them as part of the var.subnet_config objects. You can then clearly see which numbers you've assigned and make sure that any new networks are always assigned a later number without disturbing any existing assignments.
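As a sketch of that idea (the netnum attribute name is purely illustrative), the numbering could live in the objects themselves:

subnet_config = {
  cluster1 = {
    region      = "us-east1"
    subnet_name = "default"
    netnum      = 0
  }
  cluster2 = {
    region      = "us-west1"
    subnet_name = "default"
    netnum      = 1
  }
}

The for expressions would then use config.netnum directly instead of index(keys(var.subnet_config), name), e.g. ip_cidr_range = "10.${config.netnum}.0.0/17".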
There's also an official Terraform module hashicorp/subnets/cidr which aims to encapsulate subnet numbering calculations. The design of that module means that it wouldn't be completely straightforward to adopt it for your use-case (since you're allocating two levels of subnet at once) but it might be useful to study to see whether any of the design tradeoffs made there are relevant to your module.
Currently, I'm working on a requirement to make Terraform Tags for AWS resources more modular. In this instance, there will be one tag 'Function' that will be unique to each resource and the rest of the tags to be attached will apply to all resources. What I'm trying to do is combine the unique 'Function' value with the other tags for each resource.
Here's what I've got so far:
tags = {
Resource = "Example",
"${var.tags}
This tags value is defined as a map in the variables.tf file like so:
variable "tags" {
type = map
description = "Tags for infrastructure resources."
}
and populated in the tfvars file with:
tags = {
  "Product"     = "Name",
  "Application" = "App",
  "Owner"       = "Email"
}
When I run TF Plan, however, I'm getting an error:
Expected an attribute value, introduced by an equals sign ("=").
How can variables be combined like this in Terraform? Thanks in advance for your help.
Figured this one out after further testing. Here you go:
tags = "${merge(var.tags,
map("Product", "Product Name",
"App", "${var.environment}")
)
}"
So, to reiterate: this code will merge a map variable of tags that (in my case) are applicable to many resources with the tags (Product and App) that are unique to each infrastructure resource. Hope this helps someone in the future. Happy Terraforming.
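Note that the map(...) function used above is deprecated in Terraform 0.12 and later; a rough equivalent using first-class expressions and an inline object would be:

tags = merge(var.tags, {
  Product = "Product Name"
  App     = var.environment
})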
I tried to use a map; it does work with newer versions. The line below works for me:
tags = "${merge(var.resource_tags, {a = "bb"})}"
Creating the values in my tfvars file did not work for me, so here is my approach.
I created a separate variable in my variables.tf file to use during the tagging process. My default tags variable is imported/passed from a parent module, so it doesn't need to specify any default data. The extra tagging in the child module is done through a sub_tags variable.
Imported/passed from the parent/root module:
variable "tags" {
type = "map"
}
Tags in the child module:
variable "sub_tags"{
type = "map"
default = {
Extra_Tags_key = "extra tagging value"
}
}
In the resource that needs the extra tagging, I call it like this:
tags = "${merge(var.tags, var.sub_tags)}"
This worked great for me.
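For completeness, a rough sketch of how the parent/root module might pass the common tags into such a child module (the module name and source path are illustrative):

module "child" {
  source = "./modules/child"

  # Common tags passed down into the child module's var.tags.
  tags = {
    "Product"     = "Name"
    "Application" = "App"
    "Owner"       = "Email"
  }
}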
I am creating a series of resources in Terraform (in this case, DynamoDB tables). I want to apply IAM policies to subgroups of them. E.g.:
resource "aws_dynamodb_table" "foo" {
count = "${length(var.tables)}"
name = "foo-${element(var.tables,count.index)}"
tags {
Name = "foo-${element(var.tables,count.index)}"
Environment = "<unsure how to get this>"
Source = "<unsure how to get this>"
}
}
All of these share some common elements; e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev","qa","prod"]
sources = ["a","b","c"]
So:
tables = ["a:dev","a:qa","a:prod","b:dev","b:qa","b:prod","c:dev","c:qa","c:prod"]
I want to get the ARNs of the created DynamoDB tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
group the dynamo db table resources by some of the inputs (environment or source) so I can apply some policy to each group
Extract the input for each created one so I can apply the correct tags
I was thinking of, potentially, instead of creating the cross-product list, to create maps for each input:
{
  "a": ["dev", "qa", "prod"],
  "b": ["dev", "qa", "prod"],
  "c": ["dev", "qa", "prod"]
}
or
{
  "dev": ["a", "b", "c"],
  "qa": ["a", "b", "c"],
  "prod": ["a", "b", "c"]
}
It would make it easy to find the target names for each one, since I can look up by the input, but that only gives me the names; it does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}
resource "aws_dynamodb_table" "foo" {
count = length(local.tables)
name = "foo-${local.tables[count.index].name}"
tags {
Name = "foo-${local.tables[count.index].name}"
Environment = local.tables[count.index].environment
Source = local.tables[count.index].source
}
}
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
for_each = { for t in local.tables : t.name => t }
name = "foo-${each.key}"
tags {
Name = "foo-${each.key}"
Environment = each.value.environment
Source = each.value.source
}
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means that you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances because the list indices changed.
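To cover the grouping part of the question (collecting the ARNs of a subgroup so a policy can be applied to it), a for expression with grouping mode over those for_each instances could look roughly like this; lifting the for_each map into a local value (the same map used inline above) makes it available to consult again in the outputs:

locals {
  tables_by_name = { for t in local.tables : t.name => t }
}

output "table_arns_by_environment" {
  # Map from environment to the list of ARNs of the tables in it; grouping
  # mode (...) is needed because several tables share each environment.
  value = {
    for name, table in aws_dynamodb_table.foo :
    local.tables_by_name[name].environment => table.arn...
  }
}

output "table_arns_by_source" {
  value = {
    for name, table in aws_dynamodb_table.foo :
    local.tables_by_name[name].source => table.arn...
  }
}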
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, if the result would've been a map of string values only.
A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map of local value for each object attribute, using a common set of keys in each map.
Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to just write out the cartesian product yourself as you did in the question here.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.
I need to create 6 subnets with the CIDR values below, but their order changes when Terraform creates them.
private_subnets = {
  "10.1.80.0/27"   = "x"
  "10.1.80.32/27"  = "x"
  "10.1.80.64/28"  = "y"
  "10.1.80.80/28"  = "y"
  "10.1.80.96/27"  = "z"
  "10.1.80.128/27" = "z"
}
Terraform is creating them in the order 10.1.80.0/27, 10.1.80.128/27, 10.1.80.32/27, 10.1.80.64/28, 10.1.80.80/28, 10.1.80.96/27.
Terraform module:
resource "aws_subnet" "private" {
vpc_id = "${var.vpc_id}"
cidr_block = "${element(keys(var.private_subnets), count.index)}"
availability_zone = "${element(var.availability_zones, count.index)}"
count = "${length(var.private_subnets)}"
tags {
Name = "${lookup(var.private_subnets, element(keys(var.private_subnets), count.index))}
}
}
Updated Answer:
Thanks to the discussion in the comments, I am revising my answer:
You are assuming an order within a map. This is not intended behaviour: from your example, one can see that Terraform orders the keys alphabetically internally, i.e. you can "think" of your variable as:
private_subnets = {
  "10.1.80.0/27"   = "x"
  "10.1.80.128/27" = "z"
  "10.1.80.32/27"  = "x"
  "10.1.80.64/28"  = "y"
  "10.1.80.80/28"  = "y"
  "10.1.80.96/27"  = "z"
}
You are running into problems because of mismatches with your other variable var.availability_zones, where you assume the indices line up with the order of var.private_subnets.
Relying on the above (alphabetical) ordering is not a good solution, since it may change with any version of Terraform (the order of keys is not guaranteed).
Hence, I propose to use a list of maps:
private_subnets = [
  {
    "cidr"              = "10.1.80.0/27"
    "name"              = "x"
    "availability_zone" = 1
  },
  {
    "cidr"              = "10.1.80.32/27"
    "name"              = "x"
    "availability_zone" = 2
  },
  …
]
I encoded the availability zone as index of your var.availability_zones list. However, you could also consider using the availability zone directly.
The adaptation of your code is straightforward: get the list element to obtain the map, then lookup(…) the desired key, as sketched below.
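A rough sketch of that adaptation (using index syntax rather than element() to pick the map out of the list, since element() has historically only supported flat lists, and assuming availability_zone holds an index into var.availability_zones):

resource "aws_subnet" "private" {
  count             = "${length(var.private_subnets)}"
  vpc_id            = "${var.vpc_id}"
  cidr_block        = "${lookup(var.private_subnets[count.index], "cidr")}"
  availability_zone = "${element(var.availability_zones, lookup(var.private_subnets[count.index], "availability_zone"))}"

  tags {
    Name = "${lookup(var.private_subnets[count.index], "name")}"
  }
}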
Old Answer (not applicable here):
Before Terraform creates any resources, it builds a graph structure to represent all the objects it wants to track (create, update, delete) and their dependencies on one another.
In your example, 6 different aws_subnet objects are created in the graph which do not depend on each other (there is no variable in one subnet that depends on another subnet).
When Terraform then creates the resources, it does so concurrently in (potentially) multiple threads, creating resources simultaneously if they do not depend on each other.
This is why you might see very different orders of execution across multiple runs of Terraform.
Note that this is a feature: if you have many resources to create that have no dependency on each other, they are all created simultaneously, saving a lot of time with long-running creation operations.
A solution to your problem is to explicitly model the dependencies you have in mind. Why should one subnet be created before another? If so, how can you make them dependent (e.g. via the depends_on parameter)?
Answering these questions should point you in the right direction for modelling your code according to your required layout.