Cannot assign variable from data.tf to variables.tf file - azure

I'm new to Terraform and have been building out the infrastructure recently.
I am trying to pull secrets from Azure Key Vault and assign them to variables declared in variables.tf, depending on the environment (dev.tfvars, test.tfvars, etc.). However, when I run the plan with the tfvars file as a parameter, I get an error with the following message:
Error: Variables not allowed
Here are the files and the relevant contents of it.
variables.tf:
variable "user_name" {
type = string
sensitive = true
}
data.tf (referencing the azure key vault):
data "azurerm_key_vault" "test" {
name = var.key_vault_name
resource_group_name = var.resource_group
}
data "azurerm_key_vault_secret" "test" {
name = "my-key-vault-key-name"
key_vault_id = data.azurerm_key_vault.test.id
}
test.tfvars:
user_name = "${data.azurerm_key_vault_secret.test.value}" # Where the error occurrs
Can anyone point out what I'm doing wrong here? And if so is there another way to achieve such a thing?

In Terraform, a variable can be used for user input only. You cannot assign anything dynamically computed from your code to it. Variables are like read-only arguments; for more info see Input Variables in the docs.
If you want to assign a value to something for later use, you must use locals. For example:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
Local values can be computed dynamically from other parts of your configuration. For more info, see Local Values.

You can't create dynamic variables. All variables must have known values before your code is executed. The only thing you can do is use a local instead of a variable:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
and then refer to it as local.user_name.
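The local value can then stand in wherever the variable would otherwise have been referenced, for example (a minimal sketch; the module name, source path and argument are only illustrative):
module "app" {
  source = "./modules/app"

  # Computed from the Key Vault secret at plan time; nothing needs to be set in a tfvars file.
  user_name = local.user_name
}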

Related

Validating through terraform that a vault policy exists before using it in a group

I have the following structure
module "policies" {
source = "../../../../path/to/my/custom/modules/groups"
for_each = var.config.policies
name = each.key
policy = each.value
}
module "groups" {
source = "../../../../path/to/my/custom/modules/groups"
for_each = var.config.groups
name = each.key
type = each.value.type
policies = each.value.policies
depends_on = [
module.policies
]
}
Policies and groups are declared in a YAML file, from which (via yamldecode) the values passed to for_each are created.
Is there any way to make sure that the policies passed to policies = each.value.policies of the groups module DO exist?
I do have the depends_on clause, but I also want to guard against typos in the YAML file and other similar situations.
The usual way to declare a dependency on an external object (managed elsewhere) in Terraform is to use a data block, using a data source defined by the provider responsible for that object. If the goal is only to verify that the object exists, then it's enough to declare the data source and have your downstream object's configuration refer to anything in its result, just so Terraform can see that the data source is a dependency and must be resolved first.
Unfortunately it seems like the hashicorp/vault provider doesn't currently have a data source for declaring a dependency on a policy, although there is a feature request for it.
Assuming it did exist, the pattern might look something like this:
data "vault_policy" "needed" {
for_each = var.config.policies
name = each.value
}
module "policies" {
source = "../../../../path/to/my/custom/modules/groups"
for_each = var.config.policies
name = each.key
# Accessing this indirectly via the data resource tells
# Terraform that it must complete the data lookup before
# planning anything which depends on this "policy" argument.
policy = data.vault_policy.needed[each.key].name
}
Without a data source for this particular object type I don't think there will be an elegant way to solve this, but you may be able to work around it by using a more general data source like hashicorp/external's external data source for collecting data by running an external program that prints JSON.
Again, because you don't actually seem to need any specific data from the policy and only want to check whether it exists, it would be sufficient to write an external program which queries Vault and then exits with an unsuccessful status if the request fails, or prints an empty JSON object {} if the request succeeds.
data "external" "vault_policy" {
for_each = var.config.policies
program = ["${path.module}/query-vault"]
query = {
policy_name = each.value
}
}
module "policies" {
source = "../../../../path/to/my/custom/modules/groups"
for_each = var.config.policies
name = each.key
policy = data.external.vault_policy.query.policy_name
}
I'm not familiar enough with Vault to suggest a specific implementation of this query-vault program, but you may be able to use a shell script wrapping the vault CLI program if you follow the advice in Processing JSON in shell scripts. You only need to do the input parsing part of that, because your result would be communicated either by exit 1 to signal failure or echo '{}' followed by exiting successfully to signal success.

terraform variable default value interpolation from locals

I have a use case where I need two AWS providers for different resources. The default aws provider is configured in the main module which uses another module that defines the additional aws provider.
By default, I'd like both providers to use the same AWS credentials unless explicitly overridden.
I figured I could do something like this. In the main module:
locals {
  foo_cloud_access_key = aws.access_key
  foo_cloud_secret_key = aws.secret_key
}

variable "foo_cloud_access_key" {
  type    = string
  default = local.foo_cloud_access_key
}

variable "foo_cloud_secret_key" {
  type    = string
  default = local.foo_cloud_secret_key
}
where variables foo_cloud_secret_key and foo_cloud_access_key would then be passed down to the child module like this:
module foobar {
  ...
  foobar_access_key = var.foo_cloud_access_key
  foobar_secret_key = var.foo_cloud_secret_key
  ...
}
where module foobar would then configure its additional aws provider with these variables:
provider "aws" {
alias = "foobar_aws"
access_key = var.foobar_access_key
secret_key = var.foobar_secret_key
}
When I run terraform init, it spits out this error (for both variables):
Error: Variables not allowed
on variables.tf line 66, in variable "foo_cloud_access_key":
66: default = local.foo_cloud_access_key
Variables may not be used here.
Is it possible to achieve something like this in terraform or is there any other way to go about this?
Having complex, computed default values for variables is possible, but only with a workaround:
define a dummy default value for the variable, e.g. null
define a local value whose value is either the value of the variable or the actual, computed default
variable "something" {
default = null
}
locals {
some_computation = ... # based on whatever data you want
something = var.something == null ? local.some_computation : var.something
}
Then only use local.something instead of var.something in the rest of your Terraform files.
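Applied to this question, the pattern might look something like the following sketch. Note that provider arguments cannot be read back as expressions, so the fallback here is assumed to come from another input variable, default_access_key, which is hypothetical, as is the module source path:
variable "foo_cloud_access_key" {
  type    = string
  default = null # dummy default; the real fallback is computed below
}

# Hypothetical variable holding the credential the default provider uses;
# the provider block's own arguments cannot be referenced as expressions.
variable "default_access_key" {
  type = string
}

locals {
  # Use the override when it is set, otherwise fall back to the default credential.
  foo_cloud_access_key = var.foo_cloud_access_key == null ? var.default_access_key : var.foo_cloud_access_key
}

module "foobar" {
  source            = "./modules/foobar" # illustrative path
  foobar_access_key = local.foo_cloud_access_key
}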

Iterate over map with lists in terraform 0.12

I am using terraform 0.12.8 and I am trying to write a resource which would iterate over the following variable structure:
variable "applications" {
type = map(string)
default = {
"app1" = "test,dev,prod"
"app2" = "dev,prod"
}
}
My resource:
resource "aws_iam_user" "custom" {
for_each = var.applications
name = "circleci-${var.tags["ServiceType"]}-user-${var.tags["Environment"]}-${each.key}"
path = "/"
}
So, I can iterate over my map. However, I can't figure out how to verify that var.tags["Environment"] is enabled for a specific app, e.g. app1.
Basically, I want the resource to be created for an application only if the Environment tag appears in that application's list of environments in the applications map.
Could someone help me out here?
Please note that I am happy to go with a different variable structure if you have something to propose that would accomplish my goal.
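One possible approach (a sketch, keeping the comma-separated environment strings from the applications variable above) is to filter the map inside the for_each expression so that only applications enabled for the current Environment tag are kept:
resource "aws_iam_user" "custom" {
  # Keep only the apps whose environment list contains the current environment.
  for_each = {
    for app, envs in var.applications :
    app => envs if contains(split(",", envs), var.tags["Environment"])
  }

  name = "circleci-${var.tags["ServiceType"]}-user-${var.tags["Environment"]}-${each.key}"
  path = "/"
}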

How to pass down file path to module?

I am learning terraform modules.
I've created a module for the Google provider.
provider "google" {
credentials = "${var.credentials}"
project = "${var.project_id}"
region = "${var.region}"
zone = "${var.zone}"
}
I want to pass the credentials file path from the module consuming the one above.
Here is the consumer module.
main.tf
module "google" {
source = "../modules/google-provider"
project_id = "${var.project_id}"
credentials = "${var.credentials}"
}
variables.tf
variable "credentials" {
default = "${file("cred.json")}"
}
This is the error I am getting:
Error: variable "credentials": default may not contain interpolations
I read this stackoverflow comment but did not understand how it will work.
Thank you for the help in advance.
from the docs,
When you declare variables in the root module of your configuration,
you can set their values using CLI options and environment variables.
When you declare them in child modules, the calling module should pass
values in the module block.
In your case,
# This is your calling module, hence you need to pass variables to the child module from here
module "google" {
  source = "../modules/google-provider"

  passed_project_id_to_child  = "${var.project_id}"
  passed_credentials_to_child = "${var.credentials}"
}
UPDATE: since Terraform does not allow reading a file with interpolation syntax in a variable default, create a data source of type local_file (docs):
data "local_file" "credJSON" {
filename = "./cred.json"
}
Then do something like this in your module's configuration file (or in a separate file):
locals {
  # A variable default may not contain interpolations, so compute the
  # project id in a local value instead of a variable default.
  passed_project_id_to_child = "${jsondecode(data.local_file.credJSON.content).projectId}"
}

variable "passed_credentials_to_child" {}

provider "google" {
  credentials = "${var.passed_credentials_to_child}"
  project     = "${local.passed_project_id_to_child}"
  region      = "${var.region}"
  zone        = "${var.zone}"
}
Hopefully this works.
Read more here
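Another way to sidestep the limitation on variable defaults entirely (a sketch; the file name follows the question) is to call file() at the module call site, where expressions are allowed, and pass the contents down:
module "google" {
  source      = "../modules/google-provider"
  project_id  = "${var.project_id}"
  credentials = "${file("${path.module}/cred.json")}"
}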

Attempting to use list of stacks output as module output

I have a module that creates a variable number of CloudFormation stacks. This works just fine, but I am having problems using the stack outputs as an output of the module. Each stack creates a subnet, and I specify the created subnet id as an output of the stack. I then want to return a list of all subnet ids as part of the module output. This is what I think my output should look like:
output "subnets" {
value = ["${aws_cloudformation_stack.subnets.*.outputs["Subnet"]}"]
}
I get an integer parse error when I do that. Terraform seems to be treating outputs as a list instead of a map. Any way to get this to work?
Edit: Here is where I declare the stacks:
resource "aws_cloudformation_stack" "subnets" {
count = "${local.num_zones}"
name = "Subnet-${element(local.availability_zones, count.index)}"
on_failure = "DELETE"
template_body = "${file("${path.module}/templates/subnet.yaml")}"
parameters {
CIDR = "${cidrsubnet(var.cidr,ceil(log(local.num_zones * 2, 2)), count.index)}"
AZ = "${element(local.availability_zones, count.index)}"
VPC = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
}
}
Then there is a stack output in subnet.yaml that has the key Subnet and is the id of the subnet that was created.
The stacks are all created successfully, but I can't seem to export all the created subnet ids from my Terraform module. I'm not sure why Terraform is treating *.outputs as the list instead of keeping *.outputs["Subnet"] as the list. I'm guessing *.outputs is getting converted to a list of maps, but I need a list of a specific key (Subnet) from each map.
I've got a non-list example working, using a stack output as a Terraform module output:
resource "aws_cloudformation_stack" "vpc" {
name = "${var.name_prefix}-VPC"
on_failure = "DELETE"
template_body = "${file("${path.module}/templates/vpc.yaml")}"
parameters {
CIDR = "${var.cidr}"
}
}
output "vpc" {
value = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
}
I was able to work around the issue by declaring a data source to look up the subnets after creation. It's not ideal, but it gets me past being stuck. Let me know if anyone knows how to do what I was originally trying to do. Here is what I came up with:
data "aws_subnet_ids" "subnets" {
depends_on = ["aws_cloudformation_stack.subnets"]
vpc_id = "${aws_cloudformation_stack.vpc.outputs["VPCId"]}"
}
output "subnets" {
value = "${data.aws_subnet_ids.subnets.ids}"
}
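If you can move to Terraform 0.12 or later, a for expression avoids the splat limitation on maps entirely (a sketch based on the subnets resource above):
output "subnets" {
  # Collect the "Subnet" output of every stack created with count.
  value = [for stack in aws_cloudformation_stack.subnets : stack.outputs["Subnet"]]
}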
