An example scenario would be that I have a module, and that module has variables. For example, an S3 bucket module would need a variable for the name of the bucket.
The variables.tf file would look like:
variable "bucket_name" { type = string }
Now, if I want to use that module to create the S3 bucket and then assign it to a CloudFront distribution, I have to pass the bucket name down to the bucket module at that level. For example, if I want to give the distribution a name, I need to create a variables.tf for the distribution with the distribution name defined, but because I use the S3 bucket module, I have to include all of the S3 bucket's variables as well. So the variables.tf file of the distribution would look like:
variable "bucket_name" { type = string }
variable "distribution_name" { type = string }
If I have a module that is only for us-east-1, and it needs the CloudFront distribution, I need to create a variables.tf file for that as well, containing all of the above.
If, for example, I have multiple AWS accounts for staging and prod and want the distribution included in both (or only one, it doesn't matter), I have to create yet another variables.tf file containing all of the above.
Every time I add a new variable, I have to update at least four variables.tf files, which is definitely not the way to go (Terraform is supposed to reduce hard-coding).
What I would like is, as I go up the ladder (from the S3 module to the environment module), to import the variables.tf file of the "child module" and extend it.
Just for an imaginary scenario, I'd like to:
# bucket/variables.tf
variable "bucket_name" { type = string }
# distribution/variables.tf
import_variables {
  source = "../bucket"
}
# us-east-1/variables.tf
import_variables {
  source = "../distribution"
}
# staging/us-east-1/variables.tf
import_variables {
  source = "../../us-east-1"
}
Am I taking a completely wrong approach to Terraform, or have I just missed a method by which this variable-definition sharing could be done?
Related
I have a terraform plan that defines most of my BQ environment.
I'm working on a cross-region deployment which will replicate some of my tables to multiple regions.
Rather than copy-pasting the same module in every place that I need it, I'd like to define the module in one place and just call it from every configuration that needs it.
For example, I have the following file structure:
./cross_region_tables
-> tables.tf
./foo
-> tables.tf
./bar
-> tables.tf
I'd like to define some_module in ./cross_region_tables/tables.tf like so:
output "some_module" {
x = something
region = var.region
}
Then I'd like to just call some_module from ./foo/tables.tf.
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions (as output objects). I know how to import a child module, but I don't know how to call a specific output within that child module.
I've solved the issue by adding a module object to the child module with a variable for the region, then calling the child from each regional configuration and passing the region as a variable.
In the child folder's main.tf:
variable "region" {}
module "foo" {
  source = "../path/to/foo" # wherever the reusable table module lives
  x      = "something"
  y      = "something_else"
  region = var.region
}
In the regional folder for region X:
variable "region" {
  default = "regionX"
}
module "child" {
  source = "../path/to/child"
  region = var.region
}
In the regional folder for region Y:
variable "region" {
  default = "regionY"
}
module "child" {
  source = "../path/to/child"
  region = var.region
}
Repeat for as many regions as necessary.
You can pass the provider to your modules, with each provider configured for a different region.
That is well documented here:
https://www.terraform.io/language/modules/develop/providers#passing-providers-explicitly
# The default "aws" configuration is used for AWS resources in the root
# module where no explicit provider instance is selected.
provider "aws" {
region = "us-west-1"
}
# An alternate configuration is also defined for a different
# region, using the alias "usw2".
provider "aws" {
alias = "usw2"
region = "us-west-2"
}
# An example child module is instantiated with the alternate configuration,
# so any AWS resources it defines will use the us-west-2 region.
module "example" {
source = "./example"
providers = {
aws = aws.usw2
}
}
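As a side note, if the child module should receive an aliased provider under its own alias name (rather than just having its default aws overridden as above), newer Terraform versions (0.15+) let the module declare that expectation; the alias name replica below is only illustrative:
# inside the child module
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.replica]
    }
  }
}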
The other part is what you mentioned:
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions
Resources within that module (cross_region_tables) can be turned on or off with variables.
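For example, a minimal sketch of that on/off pattern (the variable, dataset, and table names are illustrative; assuming BigQuery tables as in the question):
variable "create_foo_table" {
  type    = bool
  default = true
}

resource "google_bigquery_table" "foo" {
  count      = var.create_foo_table ? 1 : 0 # table is only created when the flag is set
  dataset_id = "my_dataset"
  table_id   = "foo"
}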
I'm new to Terraform and have been building out the infrastructure recently.
I am trying to pull secrets from Azure Key Vault and assign the keys to the variables.tf file depending on the environment (dev.tfvars, test.tfvars, etc.). However, when I execute the plan with the tfvars file as the parameter, I get an error with the following message:
Error: Variables not allowed
Here are the files and the relevant contents of it.
variables.tf:
variable "user_name" {
type = string
sensitive = true
}
data.tf (referencing the azure key vault):
data "azurerm_key_vault" "test" {
name = var.key_vault_name
resource_group_name = var.resource_group
}
data "azurerm_key_vault_secret" "test" {
name = "my-key-vault-key-name"
key_vault_id = data.azurerm_key_vault.test.id
}
test.tfvars:
user_name = "${data.azurerm_key_vault_secret.test.value}" # Where the error occurs
Can anyone point out what I'm doing wrong here? And if so, is there another way to achieve this?
In Terraform, a variable can be used for user input only. You cannot assign anything dynamically computed from your code to it. Variables are like read-only arguments; for more info, see Input Variables in the docs.
If you want to assign a value to something for later use, you must use locals. For example:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
Local values can be computed dynamically during execution. For more info, see Local Values.
You can't create dynamic variables. All variables must have known values before execution of your code. The only thing you can do is use a local instead of a variable:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
and then refer to it as local.user_name.
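Putting it together with the per-environment tfvars from the question: keep the Key Vault coordinates as input variables (set per environment in dev.tfvars, test.tfvars, etc.) and derive the secret as a local. A sketch, reusing the names from the question's data.tf; the tfvars values are only illustrative:
# variables.tf
variable "key_vault_name" { type = string }
variable "resource_group" { type = string }

# test.tfvars
key_vault_name = "my-test-key-vault"
resource_group = "my-test-rg"

# locals.tf
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}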
I have a tf.json file that declares a bunch of local variables. One of the variables is an array of complex objects, like so:
{
  "locals": [
    {
      "ordered_cache_behaviors": [
        {
          "path_pattern": "/poc-app-angular*",
          "s3_target": "dev-ui-static",
          "ingress": "external"
        }
      ]
    }
  ]
}
Here is what I want to do: instead of declaring the ordered_cache_behaviors variable statically in my file, I want it to be a computed value. I will get this configuration from an S3 bucket and set the value here. So statically the value will only be an empty array [] that I will append to with a script after getting the data from S3.
This logic needs to execute each time before a terraform plan or terraform apply. What is the best way to do this? I am assuming I need to use a provisioner to fire off a script? If so, how do I then set the local variable?
If the cache configuration data can be JSON-formatted, you may be able to use the aws_s3_bucket_object data source plus the jsondecode function as an alternative approach:
Upload your cache configuration data to the poc-app-cache-config bucket as cache-config.json, and then use the following to have Terraform download that file from S3 and parse it into your local ordered_cache_behaviors variable:
data "aws_s3_bucket_object" "cache_configuration" {
bucket = "poc-app-cache-config"
key = "cache-config.json" # JSON-formatted cache configuration map
}
...
locals {
  ordered_cache_behaviors = jsondecode(data.aws_s3_bucket_object.cache_configuration.body)
}
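The cache-config.json object uploaded to the bucket then simply contains the array that was previously declared statically, e.g.:
[
  {
    "path_pattern": "/poc-app-angular*",
    "s3_target": "dev-ui-static",
    "ingress": "external"
  }
]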
I am trying to use etag when I update my S3 bucket, but I get this error:
Error: Error in function call
on config.tf line 48, in resource "aws_s3_bucket_object" "bucket_app":
48: etag = filemd5("${path.module}/${var.env}/app-config.json")
|----------------
| path.module is "."
| var.env is "develop"
Call to function "filemd5" failed: no file exists at develop/app-config.json.
However, this works fine:
resource "aws_s3_bucket_object" "bucket_app" {
bucket = "${var.app}-${var.env}-app-assets"
key = "config.json"
source = "${path.module}/${var.env}/app-config.json"
// etag = filemd5("${path.module}/${var.env}/app-config.json")
depends_on = [
local_file.app_config_json
]
}
I am generating the file this way:
resource "local_file" "app_config_json" {
content = local.app_config_json
filename = "${path.module}/${var.env}/app-config.json"
}
I really don't get what I am doing wrong...
If you happen to arrive here and are using an archive_file data source, there is an exported attribute called output_md5. This seems to provide the same result that you would get from filemd5(data.archive_file.app_config_json.output_path).
Here is a full example:
data "archive_file" "config" {
  type        = "zip"
  output_path = "${path.module}/config.zip"
  source {
    filename = "config/template-configuration.json"
    content  = "some content"
  }
}
resource "aws_s3_bucket_object" "config" {
  bucket       = aws_s3_bucket.stacks.bucket
  key          = "config.zip"
  content_type = "application/zip"
  source       = data.archive_file.config.output_path
  etag         = data.archive_file.config.output_md5
}
All functions in Terraform run during the initial configuration processing, not during the graph walk. For all of the functions that read files on disk, that means that the files must be present on disk prior to running Terraform as part of the configuration itself -- usually, checked in to version control -- rather than being generated dynamically during the Terraform operation.
The documentation for file, which filemd5 builds on, has the following to say about it:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
As the documentation there suggests, the local_file data source provides a way to read a file into memory as a resource during the graph walk, although its result would still need to be passed to md5 to get the result you needed here.
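For completeness, that data-source variant would look roughly like this (a sketch; the data source name app_config is illustrative, and the file layout is taken from the question):
data "local_file" "app_config" {
  # referencing the resource's filename makes the dependency visible to Terraform
  filename = local_file.app_config_json.filename
}

resource "aws_s3_bucket_object" "bucket_app" {
  bucket = "${var.app}-${var.env}-app-assets"
  key    = "config.json"
  source = data.local_file.app_config.filename
  etag   = md5(data.local_file.app_config.content)
}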
Because you're creating the file with a local_file resource anyway, you can skip the need for the additional data resource and derive the MD5 hash directly from your local_file.app_config_json resource:
resource "aws_s3_bucket_object" "bucket_app" {
bucket = "${var.app}-${var.env}-app-assets"
key = "config.json"
source = local_file.app_config_json.filename
etag = md5(local_file.app_config_json.content)
}
Note that we don't need to use depends_on if we derive the configuration from attributes of the local_file.app_config_json resource, because Terraform can then already see that the dependency relationship exists.
I had a flat structure for all my .tf files and want to migrate to a folder-based (i.e. module-based) setup so that my code is clearer.
For example, I have moved my instance and Elastic IP (eip) definitions into separate folders:
/terraform
  /instance
    instance.tf
  /eip
    eip.tf
In my instance.tf:
resource "aws_instance" "rancher-node-production" {}
In my eip.tf:
module "instance" {
source = "../instance"
}
resource "aws_eip" "rancher-node-production-eip" {
instance = "${module.instance.rancher-node-production.id}"
However, when running terraform plan:
Error: resource 'aws_eip.rancher-node-production-eip' config: "rancher-node-production.id" is not a valid output for module "instance"
Think of modules as black boxes that you can't "reach" into. To get data out of a module, that module needs to export that data with an output. So in your case, you need to declare the rancher-node-production id as an output of the instance module.
If you look at the error you're getting, that's exactly what it's saying: rancher-node-production.id is not a valid output of the module (because you never defined it as an output).
Anyway here's what it would look like.
# instance.tf
resource "aws_instance" "rancher-node-production" {}
output "rancher-node-production" {
  value = {
    id = "${aws_instance.rancher-node-production.id}"
  }
}
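With that output in place, the eip.tf from the question can reference the instance ID through the module (shown here with Terraform 0.12+ attribute syntax):
# eip.tf
module "instance" {
  source = "../instance"
}

resource "aws_eip" "rancher-node-production-eip" {
  instance = module.instance.rancher-node-production.id
}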
Hope that fixes it for you.