I am trying to configure a specific list of users with Terraform in a dynamic block.
First, I have all my user/password pairs stored as JSON in Vault, like this:
{
"user1": "longPassword1",
"user2amq": "longPassword2",
"user3": "longPassword3"
}
Then I declare the Vault data source with:
data "vault_kv_secret_v2" "all_clients" {
provider = my.vault.provider
mount = "credentials/aws/amq"
name = "dev/clients"
}
and in a locals section:
locals {
all_clients = tomap(jsondecode(data.vault_kv_secret_v2.all_clients.data.client_list))
}
In my .tf file, I declare the dynamic block like this:
dynamic "user" {
for_each = local.all_clients
content {
username = user.key
password = user.value
console_access = "false"
groups = ["users"]
}
}
When I apply my Terraform, I get an error:
│ on modules/amq/amq.tf line 66, in resource "aws_mq_broker" "myproject":
│ 66: for_each = local.all_clients
│
│ Cannot use a map of string value in for_each. An iterable collection is
│ required.
I tried many ways to handle such a map, but they always ended with an error (for example using = instead of :, skipping the jsondecode, or restructuring the JSON as a map or a list of objects with
"username": "user1"
"password": "pass1"
etc.). I am open to adjusting the JSON to make it work.
Nothing worked, and I am a bit out of ideas about how to map such a simple thing into Terraform. I have already checked plenty of questions/answers on SO and none of them work for me.
Terraform version 1.3.5
UPDATE:
By just putting an output on the local variable outside my module:
locals {
all_clients = jsondecode(data.vault_kv_secret_v2.all_clients.data.client_list)
}
output "all_clients" {
value     = local.all_clients
sensitive = true # required because the Vault data is marked sensitive
}
After I applied the code, the command terraform output -json all_clients showed my JSON structure properly (and the same when I used = instead of : and skipped the jsondecode; it is then just displayed as a map).
As the answer says, the issue is related to sensitivity when declaring the loop.
Separately, I had to adjust my usernames to not be email addresses, because those are not supported by AWS AmazonMQ (ActiveMQ), and the password field must be longer than 12 characters (max 250 characters).
I think the problem here is something Terraform doesn't support but isn't explaining well: you can't use a map that is marked as sensitive directly as the for_each expression, because doing so would disclose some information about that sensitive value in a way that Terraform can't hide in the UI. At the very least, it would expose the number of elements.
It seems like in this particular case it's overly conservative to consider the entire map to be sensitive, but neither Vault nor Terraform understand the meaning of your data structure and so are treating the whole thing as sensitive just to make sure nothing gets disclosed accidentally.
Assuming that only the passwords in this result are actually sensitive, I think the best answer would be to specify explicitly what is and is not sensitive using the sensitive and nonsensitive functions to override the very coarse sensitivity the hashicorp/vault provider is generating:
locals {
all_clients = tomap({
for user, password in jsondecode(nonsensitive(data.vault_kv_secret_v2.all_clients.data.client_list)) :
user => sensitive(password)
})
}
Using nonsensitive always requires care because it's overriding Terraform's automatic inference of sensitive values and so if you use it incorrectly you might show sensitive information in the Terraform UI.
In this case I first used nonsensitive on the whole JSON string returned by the vault_kv_secret_v2 data source, which therefore avoids making the jsondecode result wholly sensitive. Then I used the for expression to carefully mark just the passwords as sensitive, so that the final value would -- if assigned to somewhere that causes it to appear in the UI -- appear like this:
tomap({
"user1" = (sensitive value)
"user2amq" = (sensitive value)
"user3" = (sensitive value)
})
Now that the number of elements in the map and the map's keys are no longer sensitive, this value should be compatible with the for_each in a dynamic block just like the one you showed. Because the value of each element is sensitive, Terraform will treat the password argument value in particular as sensitive, while disclosing only the usernames used as map keys.
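For completeness, here is a minimal sketch of how that map plugs into the dynamic block from the question, with the other broker arguments elided. Note that inside a dynamic block the iterator is named after the block label (user here), not each:
resource "aws_mq_broker" "myproject" {
  # ... broker_name, engine_type, and other arguments elided ...

  dynamic "user" {
    for_each = local.all_clients
    content {
      username       = user.key
      password       = user.value # element values stay redacted in the plan
      console_access = false
      groups         = ["users"]
    }
  }
}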
If possible I would suggest testing this with fake data that isn't really sensitive first, just to make sure that you don't accidentally expose real sensitive data if anything here isn't quite correct.
Related
I know that I can generate random bytes in Terraform easily enough:
resource "random_id" "foo" {
byte_length = 32
}
resource "something_else" "foo" {
secret = sensitive(random_id.foo.b64_std)
}
However, the output of the random_id resource is not marked sensitive, and the value is exposed in logs as the resource id, even though its use in something_else is redacted by the sensitive() function.
I know that random_password is treated as secure, but it doesn't provide the ability to generate raw random bytes.
Is there a good way to generate a secure bunch of random bytes as a Terraform-managed resource?
(I'm aware that the value will always be visible in the state file, but we manage that already. I'm worried about output log files that will be much more widely visible.)
EDIT: I found a request to mark random_id secure but the idea was rejected as outside the intended use.
Why don't you use the base64encode function instead? Unfortunately Terraform doesn't support this natively yet (see the GitHub issue linked below). Something like this might give you some ideas you could use:
resource "random_string" "this" {
length = 32
}
output "random_string_output" {
value = "'${base64encode(random_string.this.result)}'"
}
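A variation on the same idea, if redaction in the UI is the main concern: random_password accepts essentially the same arguments as random_string but marks its result as sensitive, so the base64-encoded value stays hidden in plans and logs. A minimal sketch (note that an output referencing a sensitive value must itself be declared sensitive):
resource "random_password" "this" {
  length  = 32
  special = false
}

output "random_password_b64" {
  value     = base64encode(random_password.this.result)
  sensitive = true # required because random_password.result is sensitive
}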
This GitHub issue talks about this limitation/workaround quite a bit
There is no such resource or data source in Terraform. But you could develop your own custom data source that would produce those "truly random bytes" to your satisfaction.
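As a rough sketch of that idea without writing a full provider, the hashicorp/external data source can shell out to openssl (assuming it is available on PATH). The big caveat is that a data source is re-evaluated on every plan, so the value is not stable the way a managed resource's would be; a proper solution still needs a custom provider that persists the bytes in state:
data "external" "random_bytes" {
  # the program must print a JSON object of strings on stdout
  program = ["sh", "-c", "printf '{\"b64\":\"%s\"}' \"$(openssl rand -base64 32)\""]
}

# consumed elsewhere as data.external.random_bytes.result.b64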
Context: I'm developing a new resource for my TF Provider.
This foo resource has a name and an associated config: a list of key-value pairs (both sensitive and non-sensitive).
There are 3 options I've identified:
resource "foo" "option1" {
name = "option1"
config = {
"name" = "option1"
"errors.length" = 3
"tasks.type" = "FOO"
}
config_sensitive = {
"jira.key" = "..."
"credentials.json" = "..."
}
}
resource "foo" "option2" {
name = "option2"
config = {
"name" = "option1"
"errors.length" = 3
"tasks.type" = "FOO"
"jira.key" = "..."
"credentials.json" = "..."
}
}
resource "foo" "option3" {
name = "option3"
config = file("config.json")
}
The advantage of option #3 is that it looks very readable, but it requires the user to store an extra JSON file (with secrets) in the same folder (I'm not sure how acceptable that setup is). Option #2 looks tempting, but foo should accept updates, and if we mark the whole block as sensitive (since it may contain secret key-value pairs), the update functionality will suffer (the user won't see the expected change). So option #1 is the winner in my eyes, since it's the most explicit one and allows us to distinguish between sensitive and non-sensitive attributes (while allowing updates for non-sensitive ones). Reading the whole config from a file is probably not ideal either, since it doesn't really allow an engineer to see what the config looks like without opening another file.
There's also this weird duplicated name attribute but let's ignore it for now.
What configuration is the most acceptable and used by other TF Providers?
Option #3 should be struck immediately for three reasons:
You cannot reliably use the sensitive flag in the schema struct like you can with 1 and 2.
It requires a JSON format value which is cumbersome to work with unless you are forced into it (e.g. security policies).
Someone could inline the JSON and not store it in a file, which would completely work around your attempt to obscure the secrets.
Options 1 and 2 are honestly no different from a secrets management perspective. You could apply the sensitive flag to either in the nested schema struct on a per-attribute basis, and use e.g. Vault to pass in values on a KV basis for either.
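For instance, with option 1 a user could feed the sensitive block from Vault while keeping the plain config readable in diffs. A sketch reusing the hypothetical foo resource from the question (the mount and secret path are made up):
data "vault_kv_secret_v2" "foo" {
  mount = "secret"      # hypothetical KV v2 mount
  name  = "foo/option1" # hypothetical secret path
}

resource "foo" "option1" {
  name = "option1"

  config = {
    "name"          = "option1"
    "errors.length" = 3
    "tasks.type"    = "FOO"
  }

  # sensitive values come from Vault instead of being hard-coded
  config_sensitive = {
    "jira.key"         = data.vault_kv_secret_v2.foo.data["jira.key"]
    "credentials.json" = data.vault_kv_secret_v2.foo.data["credentials.json"]
  }
}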
I would opt for 1 over 2 simply because it appears to me from your question that the arguments and values in the two blocks have no relationship with each other. Therefore, it makes more sense to organize your schema into two separate blocks for code cleanliness purposes.
I will also mention that if it is possible to refactor the credentials.json into your provider, and leverage the JIRA provider for the jira.key, then that would be best practice by both code architecture and security standards. It is also how the major providers handle this situation.
Terraform providers should handle the credential/auth implementation and the resource handles the resource configuration.
e.g.
resource "jira_issue" "some_story" {
title = "My story"
type = "story"
labels = ["someexampleonstackoverflow","jakewashere"]
}
Notice there's no config that doesn't relate to the thing I'm creating inside the Terraform resource.
It's very acceptable to have some documented convention in your provider that reads credentials from somewhere, whether that's an environment variable, a file on disk, etc.
For example, the Google Cloud provider will read an environment variable if it's populated; if not, it attempts to read a configuration file that sits inside a hidden directory within $HOME, or falls back to querying a localhost HTTP metadata server for the credentials.
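In configuration terms, that convention means the provider block needs no secret material at all. A sketch for the Google provider (the project and region values are placeholders):
provider "google" {
  # Credentials are resolved implicitly, roughly in this order:
  #   1. the GOOGLE_CREDENTIALS / GOOGLE_APPLICATION_CREDENTIALS environment variables
  #   2. the gcloud application-default credentials file under $HOME
  #   3. the metadata server, when running on Google infrastructure
  project = "my-project"   # placeholder
  region  = "us-central1"  # placeholder
}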
Context: I'm developing a TF provider (here's the official guide from HashiCorp).
I run into the following situation:
# main.tf
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
parent_name = foo.example.name
parent_lastname = foo.example.lastname
}
where I have to declare parent_name and parent_lastname explicitly (effectively duplicating them) to be able to read the values that are necessary for the read / create requests for the Bar resource.
Is it possible to use a fancy trick with d *schema.ResourceData in
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
to avoid duplication in my TF config, i.e. to have just:
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
}
and infer foo.example.name and foo.example.lastname based on foo.example.id in resourceBarRead() somehow, so I won't have to duplicate those fields in both resources?
Obviously, this is a minimal example; let's assume I need both foo.example.name and foo.example.lastname to send a read / create request for the Bar resource. In other words, can I iterate through another resource in the TF state / main.tf file based on a target ID to find its other attributes? It seems like a useful feature, however it's not mentioned in HashiCorp's guide, so I guess it's undesirable and I have to duplicate those fields.
In Terraform's provider/resource model, each resource block is independent of all others unless the user explicitly connects them using references like you showed.
However, in most providers it's sufficient to pass only the id (or similar unique identifier) attribute downstream to create a relationship like this, because the other information about that object is already known to the remote system.
Without knowledge about the particular remote system you are interacting with, I would expect that you'd be able to use the value given as parent_id either directly in an API call (and thus have the remote system connect it with the existing object), or to make an additional read request to the remote API to look up the object using that ID and obtain the name and lastname values that were saved earlier.
If those values only exist in the provider's context and not in the remote API then I don't think there will be any alternative but to have the user pass them in again, since that is the only way that local values (as opposed to values persisted in the remote API) can travel between resources.
I'm pretty new to Terraform. I'm trying to use the sops provider plugin for reading secrets encrypted in a YAML file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
name = "user123"
password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
- name: user123
password: password12
I've encrypted the file using the sops command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users" as Terraform doesn't display sensitive data. When I try to use that users variable in the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
nonsensitive(u.name) => u
})
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
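For example, with a hypothetical example_user resource standing in for whatever you actually provision per user:
resource "example_user" "all" { # hypothetical resource type
  for_each = local.users

  name     = each.key            # nonsensitive, shown in the plan
  password = each.value.password # still marked sensitive, so redacted
}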
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as a part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, being careful about where and how you store them.
I have the following use-case: I'm using a combination of the Azure DevOps pipelines and Terraform to synchronize our TAP for Grafana (v7.4). Intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
}
The folder resources pushes the folders based on a variable list of names that I pass via variables. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
depends_on = [grafana_folder.folders]
}
If I read the Grafana provider's GitHub repo correctly, the grafana_folder resource should output a map of [uid, title]. So I figured that if I transpose that map, and (by way of test) look up a folder title that I know exists, I can test the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
38: folder = lookup(transpose(grafana_folder.folders),
"Station_Details", "Station_Details")
Invalid value for "default" parameter: the default value must have the
same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to succeed because Terraform has found some clever way to do automatic type conversions to produce some result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
# a local value probably isn't actually the right answer
# here, but I'm just showing it as a placeholder for one
# possible way to map from dashboard filename to folder
# name. These names should all be elements of
# var.grafana_folders in order for this to work.
dashboard_folders = {
"example1.json" = "example-folder"
"example2.json" = "example-folder"
"example3.json" = "another-folder"
}
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset("${path.module}/dashboards", "*.json")
config_json = file("${path.module}/dashboards/${each.key}")
folder = grafana_folder.folders[local.dashboard_folders[each.key]].id
}