How to pass a list of S3 ARNs inside the Terraform data resource aws_iam_policy_document - terraform

I am trying to pass multiple values to the principals' identifiers in the data resource "aws_iam_policy_document" and am getting the following error:
Inappropriate value for attribute "identifiers": element 0: string required.
The s3_values variable is defined with type = any and the values are set as follows:
....
s3_values:
  bucket: bucketname1
  s3_arns:
    - arn:aws:iam::1234567890:root
    - arn:aws:iam::2345678901:role/s3-read-role
data "aws_iam_policy_document" "s3_policy" {
count = length(var.s3_arns)
statement {
sid = "1"
effect = "Allow"
principals {
type = "AWS"
identifiers = ["${var.s3_values[count.index]["s3_arns"]}"]
}
actions = ["s3:PutObject"]
resources = ["arn:aws:s3:::${var.s3_values[count.index]["bucket"]}/*"]
}
}
I get the following error
Inappropriate value for attribute "identifiers": element 0: string required.
It works when only one value is passed, but not when we pass multiple values to the s3_arns variable.

It looks like you're trying to create multiple policy documents for a single S3 bucket. Rather than using count to create many documents, it would be best if you created a single policy document that gives access to each ARN you pass.
It currently works for one ARN because identifiers is handed a single string, so the expression builds a list with one string element. When you pass a list of ARNs, the identifiers expression instead builds a list whose single element is itself a list of the ARN strings.
I would fix this by making the s3_arns field always a list of strings and removing the count argument from the data resource. Once you do that, you can change the identifiers line to identifiers = var.s3_values.s3_arns and the resources line to resources = ["arn:aws:s3:::${var.s3_values.bucket}/*"], as in the sketch below.
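A minimal sketch of that shape, assuming s3_values is declared as an object with a bucket string and a list of ARN strings (a type = any variable holding the same structure would also work):

variable "s3_values" {
  type = object({
    bucket  = string
    s3_arns = list(string)
  })
}

data "aws_iam_policy_document" "s3_policy" {
  statement {
    sid    = "1"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = var.s3_values.s3_arns
    }

    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::${var.s3_values.bucket}/*"]
  }
}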

Related

Can you retrieve an item from a list using regex in Terraform?

The problem I am trying to solve is that I need to identify one of the Azure subnets in a virtual network by part of its name, so I can later retrieve its CIDR. I only know part of the name beforehand, such as "mgmt-1" or "egress-1". The actual name of the subnet is much longer but will end in something like that. This was my process:
I have the vnet name so I pull all subnets:
data "azurerm_virtual_network" "this" {
name = local.vnet
resource_group_name = "myrg"
}
Now what I wish I could do is this:
locals {
  mgmt_index  = index(data.azurerm_virtual_network.this.subnets, "*mgmt-1")
  mgmt_subnet = data.azurerm_virtual_network.this.subnets[local.mgmt_index]
}
However, index wants an exact match, not a regex. Is this possible to do? Or is there perhaps a better way?
Thank you,
It is not possible to directly look up a list item using a regex match, but you can use for expressions to apply arbitrary filters to a collection when constructing a new collection:
locals {
  mgmt_subnets = toset([
    for s in data.azurerm_virtual_network.this.subnets : s
    if can(regex(".*?mgmt-1", s.name))
  ])
}
In principle an expression like the above could match more than one object, so I wrote this to produce a set of objects that match.
If you expect that there will never be more than one object whose name matches the pattern, you can use Terraform's one function to assert that: Terraform will check that there is no more than one element (returning an error otherwise) and then return that single value.
locals {
  mgmt_subnet = one([
    for s in data.azurerm_virtual_network.this.subnets : s
    if can(regex(".*?mgmt-1", s.name))
  ])
}
If the condition doesn't match any of the subnet objects then in the first case you'll have an empty set and in the second case you'll have the value null.
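Since the goal was to retrieve the subnet's CIDR, here is a hedged follow-up sketch: assuming local.mgmt_subnet ends up holding the matching subnet's name (use local.mgmt_subnet.name instead if the provider returns full subnet objects here), the azurerm_subnet data source can look up its details:

data "azurerm_subnet" "mgmt" {
  name                 = local.mgmt_subnet
  virtual_network_name = local.vnet
  resource_group_name  = "myrg"
}

output "mgmt_cidr" {
  value = data.azurerm_subnet.mgmt.address_prefix
}

If local.mgmt_subnet can be null (no match), you would want to guard this lookup, since the data source will fail when given a null name.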

How to combine and sort key-value pairs in Terraform

Since the last update of the LogicMonitor provider in Terraform we're struggling with a sorting issue.
In LogicMonitor the properties of a device are name-value pairs, and they are presented alphabetically by name. API requests also return them alphabetically. So far nothing fancy.
But... We build our cloud devices using a module. When calling the module we provide some LogicMonitor properties specific to the device, and a lot more are provided in the module itself.
In the module this looks like this:
custom_properties = concat([
  {
    name  = "host_fqdn"
    value = "${var.name}.${var.dns_domain}"
  },
  {
    name  = "ocid"
    value = oci_core_instance.server.id
  },
  {
    name  = "private_ip"
    value = oci_core_instance.server.private_ip
  },
  {
    name  = "snmp.version"
    value = "v2c"
  }
],
var.logicmonitor_properties)
The first 4 properties come from the module and are combined with anything in var.logicmonitor_properties. On creation of the device in LogicMonitor all properties are set in the order they are given, and there is no problem.
The issue arises when there is any update to a Terraform file in this environment. Because the properties are presented in alphabetical order, Terraform shows a lot of changes (which are in fact just a reordering due to sorting).
The big question is: how can I sort the complete list of properties based on the "name"?
I tried working with maps, sort and several other functions and examples, but got nothing working on key-value pairs. Merging single keys works fine in a map, but how do I deal with name/value pairs?
I think you were on the right track with maps and sorting. Terraform maps do not preserve any explicit ordering themselves, and so whenever Terraform needs to iterate over the elements of a map in some explicit sequence it always does so by sorting the keys lexically (by Unicode codepoints) first.
Therefore one answer is to project this into a map and then project it back into a list of objects again. The projection back into a list of objects will implicitly sort the map elements by their keys, which I think will get the effect you wanted.
variable "logicmonitor_properties" {
type = list(object({
name = string
value = string
}))
}
locals {
base_properties = tomap({
host_fqdn = "${var.name}.${var.dns_domain}"
ocid = oci_core_instance.server.id
private_ip = oci_core_instance.server.private_ip
"snmp.version" = "v2c"
})
extra_properties = tomap({
for prop in var.logicmonitor_properties : prop.name => prop.value
})
final_properties = merge(local.base_properties, local.extra_properties)
# This final step will implicitly sort the final_properties
# map elements by their keys.
final_properties_list = tolist([
for k, v in local.final_properties : {
name = k
value = v
}
])
}
With all of the above, local.final_properties_list should be similar to the custom_properties structure you showed in your question except that the elements of the list will be sorted by their names.
This solution assumes that the property names will be unique across both base_properties and extra_properties. If there are any colliding keys between both of those maps then the merge function will prefer the value from extra_properties, overriding the element of the same key from base_properties.
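For instance, a tiny illustrative sketch of that override behavior (values invented for the example):

locals {
  merge_example = merge(
    { "snmp.version" = "v2c" }, # base_properties value
    { "snmp.version" = "v3" },  # extra_properties value wins on collision
  )
  # local.merge_example is { "snmp.version" = "v3" }
}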
First, use the sort() function to sort the keys in alphabetical order:
sorted_keys = sort(keys(var.my_map))
Next, use a for expression to create a new map from the sorted keys and their corresponding values:
sorted_map = { for key in local.sorted_keys : key => var.my_map[key] }
Finally, you can use the jsonencode() function to print the sorted map in JSON format:
jsonencode(local.sorted_map)
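Put together, a minimal hedged sketch (assuming a map(string) variable named my_map):

variable "my_map" {
  type = map(string)
}

locals {
  sorted_keys = sort(keys(var.my_map))
  sorted_map  = { for key in local.sorted_keys : key => var.my_map[key] }
}

output "sorted_map_json" {
  value = jsonencode(local.sorted_map)
}

Note that, as the first answer explains, Terraform already iterates map elements in lexical key order, so the explicit sort here is mainly for clarity.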

How do I pick elements from a terraform list

I am creating a series of resources in Terraform (in this case, DynamoDB tables). I want to apply IAM policies to subgroups of them. E.g.
resource "aws_dynamodb_table" "foo" {
count = "${length(var.tables)}"
name = "foo-${element(var.tables,count.index)}"
tags {
Name = "foo-${element(var.tables,count.index)}"
Environment = "<unsure how to get this>"
Source = "<unsure how to get this>"
}
}
All of these share some common element, e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev","qa","prod"]
sources = ["a","b","c"]
So:
tables = ["a:dev","a:qa","a:prod","b:dev","b:qa","b:prod","c:dev","c:qa","c:prod"]
I want to get the ARNs of the created DynamoDB tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
- group the DynamoDB table resources by some of the inputs (environment or source) so I can apply some policy to each group
- extract the input for each created one so I can apply the correct tags
I was thinking of, potentially, instead of creating the cross-product list, to create maps for each input:
{
  "a": ["dev","qa","prod"],
  "b": ["dev","qa","prod"],
  "c": ["dev","qa","prod"]
}
or
{
  "dev": ["a","b","c"],
  "qa": ["a","b","c"],
  "prod": ["a","b","c"]
}
It would make it easy to find the target names for each one, since I can look up by the input, but that only gives me the names; it does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}
resource "aws_dynamodb_table" "foo" {
count = length(local.tables)
name = "foo-${local.tables[count.index].name}"
tags {
Name = "foo-${local.tables[count.index].name}"
Environment = local.tables[count.index].environment
Source = local.tables[count.index].source
}
}
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
for_each = { for t in local.tables : t.name => t }
name = "foo-${each.key}"
tags {
Name = "foo-${each.key}"
Environment = each.value.environment
Source = each.value.source
}
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means that you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances just because the list indices changed.
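Since the end goal was to attach IAM policies to subgroups, here is a hedged sketch of collecting the ARNs for one group using the same local.tables structure and the for_each form above ("prod" is just an example filter):

locals {
  prod_table_arns = [
    for t in local.tables : aws_dynamodb_table.foo[t.name].arn
    if t.environment == "prod"
  ]
}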
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
- A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, if the result would've been a map of string values only.
- A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map local value for each object attribute, using a common set of keys in each map.
- Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to just write out the cartesian product yourself as you did in the question here.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.

Unable to fetch terraform list variables dynamically

I have a list variable "test" in variables.tf. I am trying to use this list variable inside my zone.tf .
I do not want to use list indexes here infact I want to run a loop to get all the values of the list from list variable dynamically. How can I accomplish this ? Any help is much appreciated.
I have tried to use count in test.tf inside resource resource "aws_route53_record" but it creates multiple record sets which I do not want as I just need a single record set with multiple records
resource "aws_route53_record" "test" {
zone_id = "${data.aws_route53_zone.dns.zone_id}"
name = "${lower(var.environment)}xyz"
type = "CAA"
ttl = 300
count = "${length(var.test)}"
records = [
"0 issue \"${element(var.test, count.index)}\"",
]
}
variables.tf :-
variable "test" {
type = "list"
default = ["godaddy.com", "Namecheap.org"]
}
zone.tf :-
resource "aws_route53_record" "test" {
zone_id = "${data.aws_route53_zone.dns.zone_id}"
name = "${lower(var.environment)}xyz"
type = "CAA"
ttl = 300
records = [
"0 issue \"${var.test[0]}\"",
"0 issue \"${var.test[1]}\"",
]
}
Expected: one record set with two records.
Actual: two record sets with two records.
So if I am understanding correctly, you want a single record set with two records, but right now when using count you are getting two record sets with one record each.
This is because, when you specify count, Terraform creates as many instances of the resource as the value of count.
Fundamentally, you have a list variable and, in a place where a list is expected, you are extracting each individual element only to put it back element by element into the list attribute.
Rather than going through that extra work, an easier solution is to move the additional part of the string (the "0 issue" prefix) into the definition of the variable and then pass the whole list in, as shown below.
variable "test" {
type = "list"
default = ["0 issue godaddy.com", "0 issue Namecheap.org"]
}
zone.tf :-
resource "aws_route53_record" "test" {
zone_id = "${data.aws_route53_zone.dns.zone_id}"
name = "${lower(var.environment)}xyz"
type = "CAA"
ttl = 300
records = ["${var.test}"]
}
This will then pass the list in for that attribute, and Terraform will take care of expanding it into the records list. I hope this answers your question.
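Alternatively, if you'd prefer to keep only the bare domain names in the variable, here is a hedged sketch using formatlist to build the CAA strings (0.11-style syntax to match the rest of this configuration):

resource "aws_route53_record" "test" {
  zone_id = "${data.aws_route53_zone.dns.zone_id}"
  name    = "${lower(var.environment)}xyz"
  type    = "CAA"
  ttl     = 300

  # formatlist applies the format to every element; %q wraps each domain in quotes
  records = ["${formatlist("0 issue %q", var.test)}"]
}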

Handling a Terraform AMI lookup returning an empty list

Is there a better way than the following to handle a Terraform data resource aws_ami_ids returning an empty list?
I always want the module to return the latest AMI's ID if one is found.
If the list was empty I was getting a "list "data.aws_ami_ids.full_unencrypted_ami.ids" does not have any elements so cannot determine type." error, so this was the workaround.
data "aws_ami_ids" "full_unencrypted_ami" {
name_regex = "${var.ami_unencrypted_regex}"
owners = ["123456789","self"]
}
locals {
notfound = "${list("AMI Not Found")}"
unencrypted_ami = "${concat(data.aws_ami_ids.full_unencrypted_ami.ids,local.notfound)}"
}
output "full_ami_unencrypted_id" {
description = "Full Unencrypted AMI ID"
value = "${local.full_unencrypted_ami[0]}"
}
1) Use the singular aws_ami data source instead of aws_ami_ids so that terraform apply fails if the AMI is gone, forcing you to update your Terraform solution.
OR
2) Create two aws_ami_ids data sources (the second being a fallback), concat the results and take the first item. But, as ydaetskcoR hinted at, why would you want this implicit (possibly undetected) fallback?
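A hedged sketch of option 1, reusing the owner and regex from the question; with most_recent = true the aws_ami data source returns the newest matching AMI and fails the run if nothing matches:

data "aws_ami" "full_unencrypted_ami" {
  name_regex  = "${var.ami_unencrypted_regex}"
  owners      = ["123456789", "self"]
  most_recent = true
}

output "full_ami_unencrypted_id" {
  description = "Full Unencrypted AMI ID"
  value       = "${data.aws_ami.full_unencrypted_ami.id}"
}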
