jsondecode fails when using for_each to pass variables to module - terraform

I'm trying to use for_each with a Terraform module that creates Datadog synthetic tests. The object names in an S3 bucket are listed and passed as the set for the for_each. The module reads the content of each file, using the each.value passed in by the calling module as the key. When I hardcoded the S3 object key value in the module during testing, it worked. When I attempt to call the module from main.tf, passing the key name in dynamically from the set, it fails with the error below.
│ Error: Error in function call
│
│ on modules\Synthetics\trial2.tf line 7, in locals:
│ 7: servicedef = jsondecode(data.aws_s3_object.tdjson.body)
│ ├────────────────
│ │ data.aws_s3_object.tdjson.body is ""
│
│ Call to function "jsondecode" failed: EOF.
main.tf
data "aws_s3_objects" "serviceList" {
  bucket = "bucketname"
}

module "API_test" {
  for_each = toset(data.aws_s3_objects.serviceList.keys)
  source   = "./modules/Synthetics"
  S3key    = each.value
}
module (modules/Synthetics/trial2.tf)
data "aws_s3_object" "tdjson" {
  bucket = "bucketname"
  key    = var.S3key
}

locals {
  servicedef = jsondecode(data.aws_s3_object.tdjson.body)
  Keys       = [for k, v in local.servicedef.Endpoints : k]
}
Any clues as to what's wrong here?
Thanks

Check out the note on the aws_s3_object data source:
The content of an object (body field) is available only for objects which have a human-readable Content-Type (text/* and application/json). This is to prevent printing unsafe characters and potentially downloading large amount of data which would be thrown away in favour of metadata.
Since it's successfully getting the data source (not throwing an error), but the body is empty, this is very likely to be your issue. Make sure that your S3 object has the Content-Type metadata set to application/json. Here's a Stack Overflow question/answer on how to do that via the CLI; you can also do it via the AWS console, API, or Terraform (if you created the object via Terraform).
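One way to satisfy that requirement is to set content_type at upload time. A minimal sketch, assuming the object is also managed by Terraform (the resource name and file path here are placeholders):

```hcl
resource "aws_s3_object" "service_def" {
  bucket       = "bucketname"
  key          = "service1.json"
  source       = "${path.module}/service1.json"
  content_type = "application/json" # without this, the data source's `body` comes back empty
}
```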
EDIT: I found the other issue. Check out the syntax for using for_each with toset:
resource "aws_iam_user" "the-accounts" {
  for_each = toset(["Todd", "James", "Alice", "Dottie"])
  name     = each.key
}
The documented pattern uses each.key (although note that when for_each is given a set, each.key and each.value are the same value, so each.value also works here).

Related

Terraform loop through JSON array to create iam users

I have a simple json file containing a list of users and groups. From this list, I would like to create the users in AWS IAM but my for_each or merging syntax is wrong.
When running terraform plan, I get the following error:
Error: Error in function call
│
│ on locals.tf line 3, in locals:
│ 3: json_data = merge([for f in local.json_files : jsondecode(file("${path.module}/input/${f}"))]...)
│ ├────────────────
│ │ local.json_files is set of string with 1 element
│ │ path.module is "."
│
│ Call to function "merge" failed: arguments must be maps or objects, got "tuple".
How do I properly loop through the list (tuple) of objects in the JSON file?
JSON File sample:
[
  { "name": "user1", "groups": ["Admins", "DevOps"], "policies": [] },
  { "name": "user2", "groups": ["DevOps"], "policies": [] }
]
Terraform Code:
locals {
  json_files = fileset("${path.module}/input/", "*.json")
  json_data  = merge([for f in local.json_files : jsondecode(file("${path.module}/input/${f}"))]...)
}

resource "aws_iam_user" "create_new_users" {
  for_each = local.json_data
  name     = each.name
}
As a side note, I did manage to get this to work by changing the JSON file to the following structure, but I'd prefer to use the former:
{
  "user1": { "groups": ["Admins", "DevOps"], "policies": [] },
  "user2": { "groups": ["DevOps"], "policies": [] }
}
and updating the aws_iam_user resource to:
resource "aws_iam_user" "create_new_users" {
  for_each = local.json_data
  name     = each.key
}
The JSON document you showed is using a JSON array, which corresponds with the tuple type in Terraform, so it doesn't make sense to use merge for that result -- merge is for merging together maps, which would correspond most closely with an object in JSON. (and indeed, that's why your second example with an object containing a property with each user worked).
For sequence-like types (lists and tuples) there is a similar function concat which will append them together to produce a single longer sequence containing all of the items in the order given. You could use that function instead of merge to get a single list of all of the users as a starting point:
locals {
  json_files = fileset("${path.module}/input/", "*.json")
  json_data  = concat([for f in local.json_files : jsondecode(file("${path.module}/input/${f}"))]...)
}
The resource for_each argument wants a mapping type though, so you'll need to do one more step to project this list of objects into a map of objects using the name attribute values as the keys:
resource "aws_iam_user" "create_new_users" {
for_each = { for u in local.json_data : u.name => u }
name = each.value.name
}
This will cause Terraform to identify each instance of the resource by the object's "name" property, and so with the sample input file you showed this will declare two instances of this resource with the following addresses:
aws_iam_user.create_new_users["user1"]
aws_iam_user.create_new_users["user2"]
(Note that it's unusual to name a Terraform resource using a verb. Terraform doesn't understand English grammar of course, so it doesn't really matter what you name it, but it's more typical to use a noun because this is only describing that a set of users should exist; you'll use this same object later to describe updating or destroying these objects. If this JSON document just represents all of your users then a more typical name might be aws_iam_user.all, since the resource type already says that these are users -- so there's no need to restate that -- and so all that's left to say is which users these are.)

Missing resource instance key when using for_each in terraform

I am creating multiple S3 buckets using for_each in Terraform. Here is the code I am using:
resource "aws_s3_bucket" "s3_private" {
for_each = var.git_repo_branch_env
bucket = each.value.override_domain_name == "" ? each.value.sitename_prefix == "" ? each.value.domain_name : join(".", [each.value.sitename_prefix, each.value.domain_name]) : each.value.sitename_prefix == "" ? each.value.override_domain_name : join(".", [each.value.sitename_prefix, each.value.override_domain_name])
force_destroy = true
}
I would like to set the ACL property for each of the buckets created, here is the code that I tried using
resource "aws_s3_bucket_acl" "s3_private_acl" {
bucket = aws_s3_bucket.s3_private.bucket
acl = "private"
}
I get the following error message with that
│ Error: Missing resource instance key
│
│ on ../../modules/cloudfront-edge-auth-acp/main.tf line 149, in resource "aws_s3_bucket_acl" "s3_private_acl":
│ 149: bucket = aws_s3_bucket.s3_private.bucket
│
│ Because aws_s3_bucket.s3_private has "for_each" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_s3_bucket.s3_private[each.key]
I get that the error message is because I have a for_each on my bucket resource and I need to add the ACL property for each of the buckets. But I am unsure of how to add the ACL property to each of the buckets.
Question: How do I assign the ACL property to each of the buckets created using for_each?
If you created multiple buckets using for_each, you need to do the same with the ACLs:
resource "aws_s3_bucket_acl" "s3_private_acl" {
for_each = aws_s3_bucket.s3_private
bucket = each.value.bucket
acl = "private"
}
It's well explained in Terraform documentation: chaining for_each between resources.

Terraform Pass Cosmos Database Connection String to KeyVault

I have recently created a Cosmos database in Terraform and I am trying to pass its connection string as a secret to Key Vault, but when doing this I get the following error:
│ Error: Incorrect attribute value type
│
│ on keyvault.tf line 282, in resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString":
│ 282: value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
│ ├────────────────
│ │ azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings has a sensitive value
│
│ Inappropriate value for attribute "value": string required.
I have also tried to use the sensitive argument, but Key Vault does not accept it, and I can't find any documentation on how to do this. On the Terraform website it is just listed as an attribute you can reference.
My secret code is below; I won't include all my code here, as Stack Overflow doesn't like the amount of code that I have.
So please presume that I am using the latest azurerm provider and that the rest of my code is correct; it's just the secret part that's not working.
resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString" { //Auth Server Cosmos Connection String Secret
name = "AuthenticationServerCosmosDBConnectionString"
value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
key_vault_id = azurerm_key_vault.nscsecrets.id
depends_on = [
azurerm_key_vault_access_policy.client,
azurerm_key_vault_access_policy.service_principal,
azurerm_cosmosdb_account.nsauthsrvcosmosdb,
]
}
There are 4 connection strings inside the value you referenced, and they are of type secure_string, so you need to convert them to strings and index into the list to choose which value to store in the Key Vault.
To store all 4 connection strings, you can use the below:
resource "azurerm_key_vault_secret" "example" {
count = length(azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings)
name = "AuthenticationServerCosmosDBConnectionString-${count.index}"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[count.index]}")
key_vault_id = azurerm_key_vault.example.id
}
If you want to store only one connection string, use whichever index you need (for example, 0 for the first connection string, and likewise 1/2/3) in the code below:
resource "azurerm_key_vault_secret" "example1" {
name = "AuthenticationServerCosmosDBConnectionString"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[0]}")
key_vault_id = azurerm_key_vault.example.id
}

"Variables may not be used here" during terraform init

I am using the Terraform Snowflake provider. I want to use the ${terraform.workspace} variable inside the terraform block.
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.20.0"
    }
  }

  backend "s3" {
    bucket         = "data-pf-terraform-backend-${terraform.workspace}"
    key            = "backend/singlife/landing"
    region         = "ap-southeast-1"
    dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
  }
}
But I got this error. Variables are not available in this scope?
Error: Variables not allowed
on provider.tf line 9, in terraform:
9: bucket = "data-pf-terraform-backend-${terraform.workspace}"
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 12, in terraform:
12: dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
Variables may not be used here.
Set up backend.tf:
terraform {
  backend "azurerm" {}
}
Create a file backend.conf:
storage_account_name = "deploymanager"
container_name       = "terraform"
key                  = "production.terraform.tfstate"
Run:
terraform init -backend-config=backend.conf
The terraform backend docs state:
A backend block cannot refer to named values (like input variables, locals, or data source attributes).
However, the s3 backend docs show you how you can partition some s3 storage based on the current workspace, so each workspace gets its own independent state file. You just can't specify a distinct bucket for each workspace. You can only specify one bucket for all workspaces, but the s3 backend will add the workspace prefix to the path:
When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
And one dynamo table will suffice for all workspaces. So just use:
backend "s3" {
bucket = "data-pf-terraform-backend"
key = "terraform.tfstate"
region = "ap-southeast-1"
dynamodb_table = "data-pf-snowflake-terraform-state-lock"
}
And switch workspaces as appropriate before deployments.
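With a single bucket like this, the s3 backend keeps each workspace's state separate on its own. Assuming the default workspace_key_prefix of env: and hypothetical workspaces named dev and prod, the state objects would end up at paths like:

```
s3://data-pf-terraform-backend/terraform.tfstate            # default workspace
s3://data-pf-terraform-backend/env:/dev/terraform.tfstate   # workspace "dev"
s3://data-pf-terraform-backend/env:/prod/terraform.tfstate  # workspace "prod"
```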
But how is Jhonny's answer any different? You still cannot put variables in backend.conf, which was the initial question.
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.conf line 1:
│ 1: bucket = "server-${var.account_id}"
│
│ Variables may not be used here.
The only way for now is to use a wrapper script that provides env variables, unfortunately.
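Such a wrapper can inject the per-environment values as partial backend configuration at init time, since -backend-config key/value pairs (unlike variables) are allowed there. A minimal sketch, assuming an ACCOUNT_ID environment variable and a hypothetical lock-table naming scheme:

```shell
#!/bin/sh
# hypothetical wrapper: pass environment-specific backend settings at init time,
# since the backend block itself cannot reference variables
terraform init \
  -backend-config="bucket=server-${ACCOUNT_ID}" \
  -backend-config="dynamodb_table=locks-${ACCOUNT_ID}"
```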
You could checkout terragrunt, which is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.
See here: https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
Check Jhonny's solution first:
https://stackoverflow.com/a/69664785/132438
(keeping this one for historical reference)
Seems like a specific instance of a more common problem in Terraform: Concatenating variables.
Using locals to concatenate should fix it. See https://www.terraform.io/docs/configuration/locals.html
An example from https://stackoverflow.com/a/61506549/132438:
locals {
  BUCKET_NAME = [
    "bh.${var.TENANT_NAME}.o365.attachments",
    "bh.${var.TENANT_NAME}.o365.eml"
  ]
}

resource "aws_s3_bucket" "b" {
  bucket = element(local.BUCKET_NAME, 2)
  acl    = "private"
}

Terraform - A reference to resource type must be followed by at least one attribute access, specifying the resource name

I am trying to use a Terraform string function and string concatenation on a tfvars variable, but when I run terraform plan it throws the exception below.
Error: A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Following is the terraform code
locals {
  name_suffix = "${var.namespace != "" ? var.namespace : var.env}"
}

resource "azurerm_container_registry" "my_acr" {
  name                = "myacr${replace(name_suffix, "-", "")}"
  location            = "${azurerm_resource_group.location}"
  resource_group_name = "${azurerm_resource_group.name}"
  sku                 = "Basic"
  admin_enabled       = true
}
Here namespace value will be resolved at runtime.
Terraform version 0.12.7
It was a silly mistake: instead of name_suffix, I should have written local.name_suffix inside the ACR resource.
Had a similar issue when setting up Terraform configuration files for AWS Fargate.
Got the error below:
│ Error: Invalid reference
│
│ on ../ecs/main.tf line 72, in resource "aws_ecs_service" "aes":
│ 72: type = order_placement_type
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
╵
╷
│ Error: Invalid reference
│
│ on ../ecs/main.tf line 73, in resource "aws_ecs_service" "aes":
│ 73: field = order_placement_field
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
The issue was that I missed the var prefix for variables, so instead of this:
ordered_placement_strategy {
  type  = order_placement_type
  field = order_placement_field
}
I corrected it to this:
ordered_placement_strategy {
  type  = var.order_placement_type
  field = var.order_placement_field
}
That's all.
Another thing to check. Make sure you have the index specifier in the correct position.
I had the following code and ran into this problem:
data "cloudflare_origin_ca_root_certificate" "current" {
count = var.domain == null ? 0 : 1
algorithm = tls_private_key.privateKey[0].algorithm
}
resource "aws_acm_certificate" "cert" {
count = var.domain == null ? 0 : 1
#...
certificate_chain = data.cloudflare_origin_ca_root_certificate[0].current.cert_pem
}
Turns out I made the mistake of putting the [0] before the current selector instead of after. So I just had to change the certificate_chain line to the following:
certificate_chain = data.cloudflare_origin_ca_root_certificate.current[0].cert_pem
