Terraform v0.12.17
I have a Terraform configuration where I want to look up an AMI by name using a jenkins_version variable passed in on the command line:
$ terraform plan -var "jenkins_version=2.249.3" -out out.output
The data source looks like this:
data "aws_ami" "jenkins_master_ami" {
most_recent = true
filter {
name = "name"
values = ["packer-jenkins-master-${var.jenkins_version}"]
}
owners = ["1234567890"]
}
In this example I expect it to return the AMI named packer-jenkins-master-2.249.3 (which I know exists, because I just created it) for the given owner. However, I get an error, so I presumably have the wrong syntax. What is the correct syntax?
Error: Your query returned no results. Please change your search criteria and try again.
Based on the comments.
I verified the data.aws_ami.jenkins_master_ami using my own sandbox account, and the data source definition is correct.
It returns the AMI named packer-jenkins-master-2.249.3 as expected, provided that AMI exists in the given region and account.
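For reference, this is roughly how I wired it up to check (only the variable declaration and the output are additions around your unchanged data source; the names are illustrative):
variable "jenkins_version" {
  type = string
}

output "jenkins_master_ami_name" {
  # only resolves if the data source finds a matching image
  value = data.aws_ami.jenkins_master_ami.name
}
If the query still returns no results, double-check that the provider's region and the owners account ID match where the image was actually registered.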
I am trying to configure a specific list of users with Terraform in a dynamic block.
First, I have all my users and passwords stored as JSON in Vault, like this:
{
  "user1": "longPassword1",
  "user2amq": "longPassword2",
  "user3": "longPassword3"
}
then I declare the Vault data with
data "vault_kv_secret_v2" "all_clients" {
provider = my.vault.provider
mount = "credentials/aws/amq"
name = "dev/clients"
}
and in a locals section:
locals {
  all_clients = tomap(jsondecode(data.vault_kv_secret_v2.all_clients.data.client_list))
}
In my tf file, I declare the dynamic block like this:
dynamic "user" {
for_each = local.all_clients
content {
username = each.key
password = each.value
console_access = "false"
groups = ["users"]
}
}
When I apply my Terraform, I get an error:
│ on modules/amq/amq.tf line 66, in resource "aws_mq_broker" "myproject":
│ 66: for_each = local.all_clients
│
│ Cannot use a map of string value in for_each. An iterable collection is
│ required.
I tried many ways to handle such a map, but they all ended with an error: using = instead of : and skipping the jsondecode, or restructuring the JSON as a map or a list of objects with
"username": "user1"
"password": "pass1"
and so on (I am open to adjusting the JSON to make it work). Nothing worked, and I am a bit out of ideas on how to map such a simple thing in Terraform. I have already checked plenty of questions/answers on SO and none of them work for me.
Terraform version 1.3.5
UPDATE:
By just putting an output on the local variable outside my module:
locals {
  all_clients = jsondecode(data.vault_kv_secret_v2.all_clients.data.client_list)
}

output "all_clients" {
  value = local.all_clients
}
After applying the code, the command terraform output -json all_clients shows my JSON structure properly (and the same if I store the data with = instead of :, in which case it is just displayed as a map without the jsondecode).
As the answer says, the issue is actually related to the sensitivity of the value used in the loop.
As a side note, I also had to adjust my usernames so they are not email addresses, since those are not supported by AWS Amazon MQ (ActiveMQ), and the password must be at least 12 characters long (max 250 characters).
I think the problem here is something Terraform doesn't support but isn't explaining well: you can't use a map that is marked as sensitive directly as the for_each expression, because doing so would disclose some information about that sensitive value in a way that Terraform can't hide in the UI. At the very least, it would expose the number of elements.
It seems like in this particular case it's overly conservative to consider the entire map to be sensitive, but neither Vault nor Terraform understand the meaning of your data structure and so are treating the whole thing as sensitive just to make sure nothing gets disclosed accidentally.
Assuming that only the passwords in this result are actually sensitive, I think the best answer would be to specify explicitly what is and is not sensitive using the sensitive and nonsensitive functions to override the very coarse sensitivity the hashicorp/vault provider is generating:
locals {
  all_clients = tomap({
    for user, password in jsondecode(nonsensitive(data.vault_kv_secret_v2.all_clients.data.client_list)) :
    user => sensitive(password)
  })
}
Using nonsensitive always requires care because it's overriding Terraform's automatic inference of sensitive values and so if you use it incorrectly you might show sensitive information in the Terraform UI.
In this case I first used nonsensitive on the whole JSON string returned by the vault_kv_secret_v2 data source, which therefore avoids making the jsondecode result wholly sensitive. Then I used the for expression to carefully mark just the passwords as sensitive, so that the final value would -- if assigned to somewhere that causes it to appear in the UI -- appear like this:
tomap({
  "user1" = (sensitive value)
  "user2amq" = (sensitive value)
  "user3" = (sensitive value)
})
Now that the number of elements in the map and the map's keys are no longer sensitive, this value should be compatible with the for_each in a dynamic block just like the one you showed. Because the value of each element is sensitive, Terraform will treat the password argument value in particular as sensitive, while the usernames (the map keys) remain visible.
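With that local in place, the dynamic block from your question should work with one small adjustment (a sketch, not tested against Amazon MQ): inside a dynamic block the iterator is named after the block label, so it is user.key / user.value rather than each.key / each.value:
dynamic "user" {
  for_each = local.all_clients

  content {
    username       = user.key
    password       = user.value
    console_access = false
    groups         = ["users"]
  }
}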
If possible I would suggest testing this with fake data that isn't really sensitive first, just to make sure that you don't accidentally expose real sensitive data if anything here isn't quite correct.
I'm pretty new to Terraform. I'm trying to use the sops provider plugin to manage secrets stored encrypted in a YAML file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using "sops" command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users", as Terraform doesn't display sensitive data. When I try to use that users variable for the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
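For example, something along these lines, where the resource type is purely a placeholder for whatever your later provisioning stage actually creates:
# Hypothetical resource type, shown only to illustrate the for_each shape.
resource "example_user" "all" {
  for_each = local.users

  name     = each.value.name
  password = each.value.password
}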
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, being careful about where and how you store them.
Is there some special variable available in Terraform configuration files which points to the current file name?
I'd like to use it for description fields in various resources, so that someone seeing these resources in the target systems would know where the master definition for them lives.
e.g.
in myinfra.tf
resource "aws_iam_policy" "my_policy" {
name = "something-important"
description = "Managed by Terraform at ${HERE_I_WOULD_LIKE_TO_USE_THE_VARIABLE}"
policy = <<EOF
[...]
EOF
}
And I would hope the description becomes:
description = "Managed by Terraform at myinfra.tf"
I tried ${path.module}, but that only gives the "filesystem path of the module where the expression is placed", so, pragmatically speaking, everything but the file name I want.
Here's what I can share. Use the external data source to call a script that determines the directory/file name and returns it as a string (or whatever type your resources require). Obviously it's not exactly what you wanted, since you get the dir/file name indirectly, but hopefully it helps you or others with similar use cases.
We use this only with azurerm, for very complex integrations that are not yet supported by the current provider versions. I have not tested it specifically with AWS, but since external is a core Terraform provider, I'm guessing it works across the board.
data "external" "cwd" {
program = ["./script.sh"]
query = {
cwd = "${path.cwd}"
}
}
resource "aws_iam_policy" "my_policy" {
name = "something-important"
description = "Managed by Terraform at ${data.external.dir_script.result.filename}"
policy = <<EOF
[...]
EOF
This is what my script looks like:
#!/bin/sh
# Test with: echo '{"cwd":"for_testing"}' | ./script.sh

# Read the JSON query object that Terraform passes on stdin
PIPED=`cat`
echo "INFO: Got piped data: $PIPED" >&2

# Extract the working directory from the query and list the *.tf files in it
DIR=`echo "$PIPED" | jq -r .cwd`
cd "$DIR"
filename=`ls | grep '\.tf$' | xargs`
echo "INFO: Returning this as STDOUT: $filename" >&2

# The result must be a valid JSON object whose values are all strings
echo "{\"name\":\"$filename\"}"
Note that the script must return a valid JSON object on stdout. From the external data source documentation:
The program must then produce a valid JSON object on stdout, which will be used to populate the result attribute exported to the rest of the Terraform configuration. This JSON object must again have all of its values as strings. On successful completion it must exit with status zero.
Unfortunately, like the others mentioned, there's no other way to get the current file name being 'applied'.
I think you might benefit from using something like yor from Bridge Crew.
From the project's README:
Yor is an open-source tool that helps add informative and consistent tags across infrastructure-as-code frameworks such as Terraform, CloudFormation, and Serverless.
Yor is built to run as a GitHub Action automatically adding consistent tagging logics to your IaC. Yor can also run as a pre-commit hook and a standalone CLI.
So basically, it updates your resources tags with things like:
tags = {
  env                  = var.env
  yor_trace            = "912066a1-31a3-4a08-911b-0b06d9eac64e"
  git_repo             = "example"
  git_org              = "bridgecrewio"
  git_file             = "applyTag.md"
  git_commit           = "COMMITHASH"
  git_modifiers        = "bana/gandalf"
  git_last_modified_at = "2021-01-08 00:00:00"
  git_last_modified_by = "bana@bridgecrew.io"
}
Maybe that would be good enough for what you're trying to do?
As for my own experience, I have not used yor, since my tagging takes a different approach: instead of "raw" tags, we use a label module that builds the tags for us and then merges in local tags.
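Very roughly, and only as a sketch (the module path and its tags output are hypothetical; substitute whatever label module you use):
module "label" {
  source = "./modules/label" # hypothetical local module
  env    = var.env
}

resource "aws_iam_policy" "my_policy" {
  name   = "something-important"
  policy = file("${path.module}/policy.json") # placeholder
  # module-built tags merged with tags set locally on the resource
  tags   = merge(module.label.tags, { git_file = "myinfra.tf" })
}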
Just sharing this info FYI in case it helps.
I have the following use case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP (Test/Acceptance/Production) environments for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a list of names that I pass in via a variable. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
depends_on = [grafana_folder.folders]
}
If I read the Grafana Provider github correctly, the grafana_folder resource should output a map of [uid, title]. So I figured if I transpose that map, and (by way of test) lookup a folder title that I know exists, I can test the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38: folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, and so passing it to transpose doesn't really make sense but seems to be succeeding because Terraform has found some clever way to do automatic type conversions to produce some result, but then that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
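For illustration (the folder UID here is made up), transpose takes and returns a map of lists of strings, so the elements of its result are lists, and lookup requires its default to match that element type:
# transpose({ "Station_Details" = ["<folder-uid>"] })
# => { "<folder-uid>" = ["Station_Details"] }
# Because the result's elements are lists of strings, the "default"
# argument of lookup() would also have to be a list of strings, which is
# why the bare string "Station_Details" is rejected.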
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # A local value probably isn't actually the right answer here, but I'm
  # just showing it as a placeholder for one possible way to map from
  # dashboard filename to folder name. These names should all be elements
  # of var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}

resource "grafana_dashboard" "dashboards" {
  for_each    = fileset("${path.module}/dashboards", "*.json")
  config_json = file("${path.module}/dashboards/${each.key}")
  folder      = grafana_folder.folders[local.dashboard_folders[each.key]].id
}
I am using terraform to create a parameter in the AWS Parameter Store.
resource "aws_ssm_parameter" "username" {
name = "username"
type = "SecureString"
value = "to_be_defined"
overwrite = false
}
provider "aws" {
version = "~> 1.53"
}
When I run terraform apply for the first time, if the parameter does not exist terraform creates the parameter. However, if I run it again (usually with a different value) I get the error
ParameterAlreadyExists: The parameter already exists. To overwrite
this value, set the overwrite option in the request to true
If I understand correctly, this is due to the behaviour of the AWS CLI (not specific to the provider).
The current behavior for overwrite = false is
If the parameter does not exist, create it
If the parameter exists, throw exception
What I want to achieve is
If the parameter does not exist, create it
If the parameter exists, do nothing
I did not find a way in AWS CLI documentation to achieve the desired behavior.
I would like to know if there is any way to achieve the desired behaviour using terraform (or directly via AWS CLI)
I agree with @ydaetskcoR that you should maintain the value in Terraform state as well.
But if you insist on ignoring value updates when the SSM key already exists, you can use the lifecycle ignore_changes setting (https://www.terraform.io/docs/configuration/resources.html#ignore_changes).
So in your case, you can update the code to
resource "aws_ssm_parameter" "username" {
name = "username"
type = "SecureString"
value = "to_be_defined"
overwrite = false
lifecycle {
ignore_changes = [
value,
]
}
}
overwrite - (Optional) Overwrite an existing parameter. If not specified, will default to false if the resource has not been created by terraform to avoid overwrite of existing resource and will default to true otherwise (terraform lifecycle rules should then be used to manage the update behavior).
By the way, it is not good design to manage a SecureString SSM key/value with Terraform, because the value ends up unencrypted in the tfstate file.
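If you do keep the value in Terraform anyway, at the very least store the state in a backend that encrypts it at rest and restricts access. A sketch (the bucket and key names are placeholders):
terraform {
  backend "s3" {
    bucket  = "my-terraform-state" # placeholder
    key     = "ssm/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true # server-side encryption for the state object
  }
}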