I've created a straightforward module:
.
├── inputs.tf
└── main.tf
Input variables are declared in inputs.tf:
variable "workers" {
type = number
description = "Amount of spark workers"
}
variable "values_values_path" {}
main.tf is:
resource "helm_release" "spark" {
name = "spark"
repository = "https://charts.bitnami.com/bitnami"
chart = "spark"
version = "1.2.21"
namespace = ...
set {
name = "worker.replicaCount"
value = var.workers
}
values = [
"${file("${var.custom_values_path}")}"
]
}
As you can see, I'm trying to deploy a Helm release, and I'd like to pass a custom values file parameterized as custom_values_path. The calling configuration is:
.
├── main.tf
├── provider.tf
└── spark-values.yaml
My main.tf here is:
module "spark" {
source = "../modules/spark"
workers = 1
custom_values_path = "./spark_values.yaml"
}
However, I'm getting:
Error: Error in function call
on ../modules/spark/main.tf line 14, in resource "helm_release" "spark":
14: "${file("${var.custom_values_path}")}"
|----------------
| var.custom_values_path is "./spark_values.yaml"
Call to function "file" failed: no file exists at spark_values.yaml.
Complete directory structure is:
.
├── stash
│   ├── main.tf
│   ├── provider.tf
│   └── spark-values.yaml
└── modules
    └── spark
        ├── inputs.tf
        └── main.tf
When I run terraform plan, I am in ./stash, so the complete commands are:
$ > cd ./stash
$ stash > terraform plan
Error: Error in function call
on ../modules/spark/main.tf line 14, in resource "helm_release" "spark":
14: "${file("${var.custom_values_path}")}"
|----------------
| var.custom_values_path is "./spark_values.yaml"
Call to function "file" failed: no file exists at spark_values.yaml.
Why do I get Call to function "file" failed: no file exists?
Since you are referring, from a child module, to a file that lives next to the calling module, you should build the path from the calling module's own location using path.module, as follows:
module "spark" {
source = "../modules/spark"
workers = 1
custom_values_path = "${path.module}/spark_values.yaml"
}
I recommend against referring to files across module boundaries in Terraform. You are better off keeping dependencies between modules to variables only, to avoid weird issues like this. An alternative is to provide the entire file contents as a variable (which is what the file function produces anyway):
module "spark" {
source = "../modules/spark"
workers = 1
custom_values = file("${path.module}/spark_values.yaml")
}
And then modify your Spark module to expect custom_values with the content rather than the path of the file:
resource "helm_release" "spark" {
name = "spark"
repository = "https://charts.bitnami.com/bitnami"
chart = "spark"
version = "1.2.21"
namespace = ...
set {
name = "worker.replicaCount"
value = var.workers
}
values = [
var.custom_values
]
}
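For completeness, the module's inputs.tf would then declare custom_values instead of custom_values_path; a minimal sketch (the type and description are my assumptions):
variable "custom_values" {
  type        = string
  description = "Contents of a Helm values file"
}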
Looking at that, I suspect the values parameter expects list(string), so you might need to use yamldecode on custom_values.
I think what is happening here is a problem with the relative path. You are passing the variable to the module, and relative to the module's path the file spark_values.yaml is located at "../../stash/spark_values.yaml".
When working with the file function, I usually use ${path.module}. I would put just the file name in the variable (spark_values.yaml) and then call it as follows: file("${path.module}/${var.file_name}"). Can you double-check whether that works for your case?
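As a rough sketch of that suggestion (the variable name file_name and its default are assumptions, and it presumes the values file ships inside the module directory):
variable "file_name" {
  type    = string
  default = "spark_values.yaml" # assumed default
}

# inside the helm_release resource:
#   values = [file("${path.module}/${var.file_name}")]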
I've created this folder structure:
.
├── main.tf
└── terragrunt.hcl
# FILE: terragrunt.hcl
include {
  path = find_in_parent_folders()
}

locals {
  common_vars  = read_terragrunt_config(find_in_parent_folders("common.hcl"))
  cluster_name = local.common_vars.locals.cluster_name
}

terraform {
  source = "./main.tf"
}

# FILE: main.tf
module "tags" {
  source            = "..."
  eks_cluster_names = [local.cluster_name]
}

module "vpc" {
  source          = "..."
  aws_region      = local.common_vars.locals.aws_region
  ...
  vpc_custom_tags = module.tags.vpc_eks_tags
  ...
}
But for every local. reference I try to use, I get an error:
A local value with the name "blabla" has not been declared
So now I am trying to figure out a way to make this work. I considered following how-to-access-terragrunt-variables-in-terraform-code, but I didn't want to create a variables.tf. Another problem is that I would have to redefine all outputs from the modules in main.tf. Isn't there a nicer way to do this?
Is there a structure that is a good practice I could follow? How could I "propagate" these locals in terragrunt.hcl to main.tf?
Sorry to disappoint, but you do have to create a variables.tf; that is standard Terraform. You define the input variables your Terraform configuration needs there, and Terragrunt fills them in.
So your terragrunt file should look something like:
# FILE: terragrunt.hcl
locals {
  common_vars  = read_terragrunt_config(find_in_parent_folders("common.hcl"))
  cluster_name = local.common_vars.locals.cluster_name
}

terraform {
  source = "./main.tf"
}

inputs = {
  cluster_name = local.cluster_name
  aws_region   = local.common_vars.locals.aws_region
}
And your terraform main should look like this:
# FILE: main.tf
module "tags" {
  source            = "..."
  eks_cluster_names = [var.cluster_name]
}

module "vpc" {
  source          = "..."
  aws_region      = var.aws_region
  ...
  vpc_custom_tags = module.tags.vpc_eks_tags
  ...
}
And your variables.tf would then look like:
variable "aws_region" {
type = string
}
variable "cluster_name" {
type = string
}
Additionally, you probably also need to create a provider.tf and a backend configuration to get this to run.
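For example, a minimal sketch (assuming AWS; the region is a placeholder) that lets Terragrunt generate the provider file for every component from the root terragrunt.hcl instead of committing a provider.tf by hand:
# root terragrunt.hcl
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  region = "eu-west-1" # placeholder region
}
EOF
}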
Terragrunt calls TF modules directly. That means you can get rid of main.tf and use just Terragrunt to wire your modules together. There needs to be a separate subfolder (component) with a terragrunt.hcl per TF module.
Your project structure will look like this:
.
├── terragrunt.hcl
├── tags
│   └── terragrunt.hcl
└── vpc
    └── terragrunt.hcl
Feel free to have a look at how that works and how the variables are passed across the modules at my example here.
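As a rough sketch of the vpc component under that layout (module sources are elided as in your question; the dependency output name vpc_eks_tags mirrors your main.tf and is otherwise an assumption):
# vpc/terragrunt.hcl
include {
  path = find_in_parent_folders()
}

locals {
  common_vars = read_terragrunt_config(find_in_parent_folders("common.hcl"))
}

terraform {
  source = "..." # your vpc module
}

dependency "tags" {
  config_path = "../tags"
}

inputs = {
  aws_region      = local.common_vars.locals.aws_region
  vpc_custom_tags = dependency.tags.outputs.vpc_eks_tags
}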
I am working on a 100% Terraform project and I am trying to use the output value from one module in another module. Based on different Stack Overflow posts, the most popular way to import the output of module a into module b is to reference module a inside module b, like so:
modules/b/main.tf
module "a" {
source = "./modules/a"
}
After that, you can access output variables from module a inside module b.
My project structure:
├── main.tf # declaring all my modules here
├── modules
│   ├── accounts
│   │   ├── main.tf
│   │   └── variables.tf
│   └── organizations
│       ├── main.tf
│       ├── outputs.tf # the output that I want to use in accounts
│       └── variables.tf
├── providers.tf
├── variables.tf
└── versions.tf
So my issue is I am declaring all my modules in my main.tf
main.tf
module "organizations" {
source = "./modules/organizations"
}
module "accounts" {
source = "./modules/accounts"
}
However, I need to use one output of modules/organizations in modules/accounts, and the only way I have found to do that is to have (another) organizations module in my modules/accounts/main.tf:
modules/accounts/main.tf
module "organizations" {
source = "../organizations"
}
resource "aws_organizations_account" "this" {
name = "uuuu"
email = "udduu#gmail.com"
parent_id = module.organizations.sandbox_organizational_unit_id #HERE
}
But since I already have an organizations module in my main.tf, it's creating/deleting resources in my organization module twice.
organizations/main.tf
data "aws_organizations_organization" "root" {}

locals {
  root_id = data.aws_organizations_organization.root.roots[0].id
}

resource "aws_organizations_organizational_unit" "sandboxs" {
  name      = var.aws_sandboxs_unit_name
  parent_id = local.root_id
}
organizations/outputs.tf
output "sandbox_organizational_unit_id" {
  value       = aws_organizations_organizational_unit.sandboxs.id
  description = "ID of the Sandboxs OU"
  sensitive   = false
}
Neither of your modules should explicitly refer to the other one. Instead, each should declare what it expects as input (via a variable block) and what it provides in return (via an output block).
Then in your main.tf, you can plug everything together:
module "organizations" {
source = "./modules/organizations"
some_variable = module.accounts.some_output
}
module "accounts" {
source = "./modules/accounts"
}
in "organizations", a some_variable must be declared as input: variable some_variable {}
in accounts , a some_output must be declared as output: output some_output { value = ... }
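Applied to your concrete case, a minimal sketch could look like this (the variable name parent_id is my assumption; the other names mirror your modules):
# modules/accounts/variables.tf
variable "parent_id" {
  type        = string
  description = "OU to create the account under"
}

# modules/accounts/main.tf
resource "aws_organizations_account" "this" {
  name      = "uuuu"
  email     = "udduu@gmail.com"
  parent_id = var.parent_id
}

# main.tf (root)
module "organizations" {
  source = "./modules/organizations"
}

module "accounts" {
  source    = "./modules/accounts"
  parent_id = module.organizations.sandbox_organizational_unit_id
}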
Anyone know how to get terragrunt to read tfvars files into dependent modules? If I declare all my tfvars as inputs in my root terragrunt.hcl, everything works fine, but of course then I can’t customize them by environment. I tried adding the extra_arguments block, but the variables aren’t declared in the root module. They’re declared in the dependent module and I don’t want to have to declare them in both places.
Here’s my setup:
// terraform/terragrunt.hcl
terraform {
  extra_arguments "common_vars" {
    commands = ["plan", "apply"]
    arguments = [
      "-var-file=${find_in_parent_folders("account.tfvars")}",
      "-var-file=./terraform.tfvars"
    ]
  }
}

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  bucket           = local.environment_vars.locals.bucket
}

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
    bucket = local.bucket
  }
}

dependencies {
  paths = ["../../../shared/services", "../../../shared/core"]
}
// terraform/accounts/dev/account.tfvars
aws_region = "us-east-1"

// terraform/accounts/dev/william/terraform.tfvars
aws_vpc_cidr = "10.1.0.0/16"

// terraform/accounts/dev/william/terragrunt.hcl
include {
  path = find_in_parent_folders()
}
This doesn't work because the variable values don't actually get passed to the dependent modules. I got this back when I tried to run a terragrunt plan:
$ terragrunt plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Warning: Value for undeclared variable
The root module does not declare a variable named
"aws_region" but a value was found in file
"/Users/williamjeffries/code/parachute/infrastructure/terraform/accounts/dev/account.tfvars".
To use this value, add a "variable" block to the configuration.
Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.
Actually, there were 26 such warnings; I've only pasted one here, but you get the idea. It seems like there should be some way to solve this with a Terragrunt generate block, but I'm not sure how. Any ideas?
I have been following the documentation here, which suggests this directory structure:
live
├── prod
│   ├── app
│   │   └── terragrunt.hcl
│   ├── mysql
│   │   └── terragrunt.hcl
│   └── vpc
│       └── terragrunt.hcl
├── qa
│   ├── app
│   │   └── terragrunt.hcl
etc...
and
# content of qa/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in qa
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for qa
  instance_count = 3
  instance_type  = "t2.micro"
}
and
# content of prod/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in prod
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for prod
  instance_count = 20
  instance_type  = "t2.2xlarge"
}
The source could also live within the same git repo, i.e. just the app directory. You can then customize the app module per environment (and even pin different versions in different environments).
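For those inputs to reach Terraform, the app module itself still has to declare matching variables; a minimal sketch of what its variables.tf would contain:
# modules/app/variables.tf (sketch)
variable "instance_count" {
  type = number
}

variable "instance_type" {
  type = string
}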
I am using terraform via terragrunt.
I have a folder with a single terragrunt.hcl file in it. The purpose of this file is to create multiple subnetworks in GCP.
To create a subnetwork, I have a module that takes several inputs.
I want to be able to create several subnetworks in my terragrunt.hcl file.
I think the best way would be to create a list of dictionaries (or maps, as Terraform calls them) and then iterate over them. Here is some non-working code:
# terragrunt.hcl
include {
  path = find_in_parent_folders()
}

inputs = {
  # Common tags to be assigned to all resources
  subnetworks = [
    {
      "subnetName": "subnet1-euw"
      "subNetwork": "10.2.0.0/16"
      "region": "europe-west1"
    },
    {
      "subnetName": "subnet1-usc1"
      "subNetwork": "10.3.0.0/16"
      "region": "us-central1"
    }
  ]
}

terraform {
  module "subnetworks" {
    source                = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
    vpc_name              = "MyVPC"
    vpc_subnetwork_name   = [for network in subnetworks: network.subnetName]
    vpc_subnetwork_cidr   = [for network in subnetworks: network.subNetwork]
    vpc_subnetwork_region = [for network in subnetworks: network.region]
  }
}
It seems I cannot use "module" inside the "terraform" block. Hopefully the code at least shows what I want to achieve.
For reference, the module I am calling looks like this:
# main.tf
terraform {
  # Intentionally empty. Will be filled by Terragrunt.
  backend "gcs" {}
}

resource "google_compute_subnetwork" "vpc_subnetwork" {
  name          = var.vpc_subnetwork_name
  ip_cidr_range = var.vpc_subnetwork_cidr
  region        = var.vpc_subnetwork_region
  network       = var.vpc_name
}
# variables.tf
variable "vpc_name" {
  description = "Name of VPC"
  type        = string
}

variable "vpc_subnetwork_name" {
  description = "Name of subnetwork"
  type        = string
}

variable "vpc_subnetwork_cidr" {
  description = "Subnetwork CIDR"
  type        = string
}

variable "vpc_subnetwork_region" {
  description = "Subnetwork region"
  type        = string
}
Terragrunt does not have a loop construct. In Terragrunt, you'd use a directory hierarchy to do what you want here. For example, to achieve your goals above, something like this:
└── live
    ├── empty.yaml
    ├── euw
    │   ├── region.yaml
    │   └── vpc
    │       └── terragrunt.hcl
    ├── terragrunt.hcl
    └── usc1
        ├── region.yaml
        └── vpc
            └── terragrunt.hcl
Within live/terragrunt.hcl, you make the other yaml files available within the terragrunt configuration:
# live/terragrunt.hcl
inputs = merge(
  # Configure Terragrunt to use common vars encoded as yaml to help you keep often-repeated variables (e.g., account ID)
  # DRY. We use yamldecode to merge the maps into the inputs, as opposed to using varfiles, due to a restriction in
  # Terraform >= 0.12 that all vars must be defined as variable blocks in modules. Terragrunt inputs are not affected by
  # this restriction.
  yamldecode(
    file("${get_terragrunt_dir()}/${find_in_parent_folders("region.yaml", "${path_relative_from_include()}/empty.yaml")}"),
  )
)
In the region.yaml within each region, you simply state the region:
# live/euw/region.yaml
# These variables apply to this entire region. They are automatically pulled in using the extra_arguments
# setting in the root terraform.tfvars file's Terragrunt configuration.
region: "europe-west1"
# live/usc1/region.yaml
region: "us-central1"
Now you can refer to the region in your per-region terragrunt.hcl files as a variable:
# live/euw/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  vpc_subnetwork_name   = "subnet1-${region}"
  vpc_subnetwork_cidr   = "10.2.0.0/16"
  vpc_subnetwork_region = region
  vpc_name              = "MyVPC"
}
Also:
# live/usc1/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  vpc_subnetwork_name   = "subnet1-${region}"
  vpc_subnetwork_cidr   = "10.3.0.0/16"
  vpc_subnetwork_region = region
  vpc_name              = "MyVPC"
}
You might find the example terragrunt repository from Gruntwork helpful.
I am creating the following folder structure for 2 separate applications, using the same modules in terragrunt:
LB
Instances
Security Groups
My question is: how do I reference a security group created for app1 in app2?
E.g. in app1 I can reference it as:
security_groups = ["${aws_security_group.sec_group_A.id}"]
How can I refer to the same security group in app2?
resource "aws_security_group" "sec_group_A" {
name = "sec_group_A"
...
...
}
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${aws_security_group.sec_group_A.id}"]
...
...
}
In app2, you can:
data "aws_security_group" "other" {
name = "sec_group_A"
}
and then use the ID:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${data.aws_security_group.other.id}"]
...
...
}
(The caveat with using a data source is that you are running two separate terraform applies: one configuration creates the group, and the other configuration references it.)
I have no experience with terragrunt, but normally I would call my modules from a "main.tf" file in the root of the project. An example folder structure is below:
.
├── main.tf
└── modules
    ├── app1
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── app2
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
My app1 outputs.tf declares a security group output:
output "sec_group_A" {
  value = aws_security_group.sec_group_A
}
I can then reference this output in my main.tf file at the root of the project. It would look something like this:
module "app1" {
source = "./modules/app1"
...
// Pass in my variables
}
module "app2" {
source = "./modules/app2"
sec_group_A = "${module.app1.sec_group_A}"
...
//Pass in the rest of my variables
}
Finally, inside the app2 module you can reference it as you would any other variable:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${var.sec_group_A.id}"]
...
...
}
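For completeness, app2 has to declare that variable; since the whole resource object is passed in and .id is read from it, leaving it untyped is the simplest sketch:
# modules/app2/variables.tf
variable "sec_group_A" {
  description = "Security group object passed in from app1"
}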
I'd read up on modules here https://www.terraform.io/docs/modules/index.html to get a better understanding of how they fit together.
Alternatively, you can grab the data from your remote state (if you have one configured), as long as sec_group_A is declared as an output in app1. See https://www.terraform.io/docs/providers/terraform/d/remote_state.html
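A minimal sketch of that approach, assuming app1's state is stored in S3 and app1 exposes the group ID as an output named sec_group_a_id (bucket, key, region, and output name are all assumptions):
data "terraform_remote_state" "app1" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"      # assumed bucket
    key    = "app1/terraform.tfstate"  # assumed key
    region = "us-east-1"               # assumed region
  }
}

resource "aws_elb" "bar" {
  name            = "foobar-terraform-elb"
  security_groups = [data.terraform_remote_state.app1.outputs.sec_group_a_id]
}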