Create multiple objects from one Terraform module using Terragrunt

I am using terraform via terragrunt.
I have a folder with a single terragrunt.hcl file in it. The purpose of this file is to create multiple subnetworks in GCP.
To create a subnetwork, I have a module that takes several inputs.
I want to be able to create several subnetworks in my terragrunt.hcl file.
I think the best way would be to create a list of dictionaries (or maps, as Terraform calls them) and then iterate over them.
Here is my non-working attempt:
# terragrunt.hcl
include {
  path = find_in_parent_folders()
}

inputs = {
  # Common tags to be assigned to all resources
  subnetworks = [
    {
      "subnetName": "subnet1-euw"
      "subNetwork": "10.2.0.0/16"
      "region": "europe-west1"
    },
    {
      "subnetName": "subnet1-usc1"
      "subNetwork": "10.3.0.0/16"
      "region": "us-central1"
    }
  ]
}

terraform {
  module "subnetworks" {
    source                = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
    vpc_name              = "MyVPC"
    vpc_subnetwork_name   = [for network in subnetworks: network.subnetName]
    vpc_subnetwork_cidr   = [for network in subnetworks: network.subNetwork]
    vpc_subnetwork_region = [for network in subnetworks: network.region]
  }
}
It seems I cannot use "module" inside the "terraform" block. Hopefully the code at least shows what I want to achieve.
For reference, the module I am calling looks like this:
# main.tf
terraform {
  # Intentionally empty. Will be filled by Terragrunt.
  backend "gcs" {}
}

resource "google_compute_subnetwork" "vpc_subnetwork" {
  name          = var.vpc_subnetwork_name
  ip_cidr_range = var.vpc_subnetwork_cidr
  region        = var.vpc_subnetwork_region
  network       = var.vpc_name
}

# variables.tf
variable "vpc_name" {
  description = "Name of VPC"
  type        = string
}

variable "vpc_subnetwork_name" {
  description = "Name of subnetwork"
  type        = string
}

variable "vpc_subnetwork_cidr" {
  description = "Subnetwork CIDR"
  type        = string
}

variable "vpc_subnetwork_region" {
  description = "Subnetwork region"
  type        = string
}

Terragrunt does not have a loop construct. In Terragrunt, you would use a directory hierarchy to do what you want here. For example, to achieve your goal above, you could lay things out like this:
└── live
    ├── empty.yaml
    ├── euw
    │   ├── region.yaml
    │   └── vpc
    │       └── terragrunt.hcl
    ├── terragrunt.hcl
    └── usc1
        ├── region.yaml
        └── vpc
            └── terragrunt.hcl
Within live/terragrunt.hcl, you make the other yaml files available within the terragrunt configuration:
# live/terragrunt.hcl
inputs = merge(
  # Configure Terragrunt to use common vars encoded as yaml to help you keep often-repeated variables (e.g., account ID)
  # DRY. We use yamldecode to merge the maps into the inputs, as opposed to using varfiles, due to a restriction in
  # Terraform >= 0.12 that all vars must be defined as variable blocks in modules. Terragrunt inputs are not affected by
  # this restriction.
  yamldecode(
    file("${get_terragrunt_dir()}/${find_in_parent_folders("region.yaml", "${path_relative_from_include()}/empty.yaml")}"),
  ),
)
In the region.yaml within each region, you simply state the region:
# live/euw/region.yaml
# These variables apply to this entire region. They are automatically pulled in by the
# yamldecode() merge in the root live/terragrunt.hcl.
region: "europe-west1"

# live/usc1/region.yaml
region: "us-central1"
Now each per-region vpc/terragrunt.hcl can read the region from its region.yaml and use it as a local:
# live/euw/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

locals {
  region = yamldecode(file(find_in_parent_folders("region.yaml"))).region
}

inputs = {
  vpc_subnetwork_name   = "subnet1-${local.region}"
  vpc_subnetwork_cidr   = "10.2.0.0/16"
  vpc_subnetwork_region = local.region
  vpc_name              = "MyVPC"
}
Also:
# live/usc1/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

locals {
  region = yamldecode(file(find_in_parent_folders("region.yaml"))).region
}

inputs = {
  vpc_subnetwork_name   = "subnet1-${local.region}"
  vpc_subnetwork_cidr   = "10.3.0.0/16"
  vpc_subnetwork_region = local.region
  vpc_name              = "MyVPC"
}
You might find the example terragrunt repository from Gruntwork helpful.

Related

How can I use locals defined in terragrunt.hcl in Terraform files?

I've created this folder structure:
.
├── main.tf
└── terragrunt.hcl
# FILE: terragrunt.hcl
include {
  path = find_in_parent_folders()
}

locals {
  common_vars  = read_terragrunt_config(find_in_parent_folders("common.hcl"))
  cluster_name = local.common_vars.locals.cluster_name
}

terraform {
  source = "./main.tf"
}
# FILE: main.tf
module "tags" {
  source            = "..."
  eks_cluster_names = [local.cluster_name]
}

module "vpc" {
  source          = "..."
  aws_region      = local.common_vars.locals.aws_region
  ...
  vpc_custom_tags = module.tags.vpc_eks_tags
  ...
}
But for every local I try to use this way, I get an error:
A local value with the name "blabla" has not been declared
So now I am trying to figure out a way to make this work. I considered following how-to-access-terragrunt-variables-in-terraform-code, but I didn't want to create a variables.tf. Another problem is that I would have to redefine all the outputs from the modules in main.tf. Isn't there a nicer way to do this?
Is there a structure that is good practice I could follow? How could I "propagate" these locals in terragrunt.hcl to main.tf?
Sorry to disappoint, but you do have to create a variables.tf - that is standard Terraform. You define the input variables your configuration needs in there, and in Terragrunt you fill them in via inputs.
So your terragrunt file should look something like:
# FILE: terragrunt.hcl
locals {
  common_vars  = read_terragrunt_config(find_in_parent_folders("common.hcl"))
  cluster_name = local.common_vars.locals.cluster_name
}

terraform {
  source = "./main.tf"
}

inputs = {
  cluster_name = local.cluster_name
  aws_region   = local.common_vars.locals.aws_region
}
And your terraform main should look like this:
# FILE: main.tf
module "tags" {
  source            = "..."
  eks_cluster_names = [var.cluster_name]
}

module "vpc" {
  source          = "..."
  aws_region      = var.aws_region
  ...
  vpc_custom_tags = module.tags.vpc_eks_tags
  ...
}
And your variables.tf would then look like:
variable "aws_region" {
type = string
}
variable "cluster_name" {
type = string
}
Additionally, you probably also need to create a provider.tf and a backend configuration to get this to run.
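One way to handle that with Terragrunt is to let it generate both files for you; a minimal sketch using generate and remote_state blocks (the bucket name and region below are placeholders, not values from the question):
# FILE: terragrunt.hcl (additional blocks, sketch)
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  region = "eu-west-1" # placeholder region
}
EOF
}

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket = "my-terraform-state" # placeholder bucket
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-west-1"          # placeholder region
  }
}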
Terragrunt calls Terraform modules directly, meaning you can get rid of main.tf and just use Terragrunt to wire your modules together. There needs to be a separate subfolder (component) with a terragrunt.hcl per Terraform module.
Your project structure will look like this:
.
├── terragrunt.hcl
├── tags
│   └── terragrunt.hcl
└── vpc
    └── terragrunt.hcl
Feel free to have a look at how that works and how the variables are passed across the modules at my example here.
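For illustration, here is a minimal sketch of how one component can consume another component's output through a Terragrunt dependency block; the module sources are elided, and the output name vpc_eks_tags is taken from the question's own main.tf:
# FILE: vpc/terragrunt.hcl (sketch)
terraform {
  source = "..."
}

include {
  path = find_in_parent_folders()
}

dependency "tags" {
  config_path = "../tags"
}

inputs = {
  vpc_custom_tags = dependency.tags.outputs.vpc_eks_tags
}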

how to get terragrunt to read tfvars files into dependent modules

Anyone know how to get terragrunt to read tfvars files into dependent modules? If I declare all my tfvars as inputs in my root terragrunt.hcl, everything works fine, but of course then I can’t customize them by environment. I tried adding the extra_arguments block, but the variables aren’t declared in the root module. They’re declared in the dependent module and I don’t want to have to declare them in both places.
Here’s my setup:
// terraform/terragrunt.hcl
terraform {
  extra_arguments "common_vars" {
    commands = ["plan", "apply"]
    arguments = [
      "-var-file=${find_in_parent_folders("account.tfvars")}",
      "-var-file=./terraform.tfvars"
    ]
  }
}

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  bucket           = local.environment_vars.locals.bucket
}

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
    bucket = local.bucket
  }
}

dependencies {
  paths = ["../../../shared/services", "../../../shared/core"]
}

// terraform/accounts/dev/account.tfvars
aws_region = "us-east-1"

// terraform/accounts/dev/william/terraform.tfvars
aws_vpc_cidr = "10.1.0.0/16"

// terraform/accounts/dev/william/terragrunt.hcl
include {
  path = find_in_parent_folders()
}
This doesn't work because the variable values don't actually get passed to the dependent modules. I got this back when I tried to run a terragrunt plan:
$ terragrunt plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
Warning: Value for undeclared variable
The root module does not declare a variable named
"aws_region" but a value was found in file
"/Users/williamjeffries/code/parachute/infrastructure/terraform/accounts/dev/account.tfvars".
To use this value, add a "variable" block to the configuration.
Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.
Actually there were 26 such warnings, I’ve only pasted in one here but you get the idea. It seems like there should be some way to solve this with a terragrunt generate block but I'm not sure how. Any ideas?
I have been following the documentation here, which suggests a directory structure like this:
live
├── prod
│   ├── app
│   │   └── terragrunt.hcl
│   ├── mysql
│   │   └── terragrunt.hcl
│   └── vpc
│       └── terragrunt.hcl
├── qa
│   ├── app
│   │   └── terragrunt.hcl
etc...
and
# content of qa/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in qa
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for qa
  instance_count = 3
  instance_type  = "t2.micro"
}
and
# content of prod/app/terragrunt.hcl
terraform {
  # Deploy version v0.0.3 in prod
  source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
}

inputs = {
  # tfvars for prod
  instance_count = 20
  instance_type  = "t2.2xlarge"
}
The source could even live in the same git repo (i.e. just the app directory). You can then customize the app module per environment, and even pin different versions in different environments.
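As a complement: since Terragrunt passes inputs to Terraform as TF_VAR_ environment variables, which a module simply ignores when it doesn't declare the matching variable, the account-level values could be fed through the root terragrunt.hcl instead of a -var-file, avoiding the undeclared-variable warnings. A minimal sketch, assuming aws_region is moved from account.tfvars into the locals of the account.hcl the root already reads:
// terraform/terragrunt.hcl (sketch)
locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
}

inputs = {
  aws_region = local.environment_vars.locals.aws_region
}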

Missing Terraform provider? What am I doing wrong? (Terraform v0.13.5)

As you can see below, I'm trying to pass a specific provider to a module, which then passes it as the main provider (aws = aws.some_profile) to a second, nested module.
On terraform plan I get the following error:
Error: missing provider module.module_a.provider["registry.terraform.io/hashicorp/aws"].some_profile
I must be doing something basic wrong or assuming the language works in a way that it doesn't. Ideas?
File structure:
├── main.tf
├── module_a
│   ├── main.tf
│   └── module_b
│       └── main.tf
└── providers.tf
main.tf (top level):
module "module_a" {
source = "./module_a"
providers = {
aws.some_profile = aws.some_profile
}
}
main.tf (inside module_a):
module "module_b" {
source = "./module_b"
providers = {
aws = aws.some_profile
}
}
main.tf (inside module_b):
resource "null_resource" "null" {}
providers.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.22.0"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

provider "aws" {
  alias   = "some_profile"
  profile = "some_profile"
  region  = "us-west-2"
}
Ok, so after getting some answers on Reddit, it looks like, although you are passing providers down to submodules, you still need to declare said providers in each submodule, like so:
provider "aws" { alias = "some_profile" }
And it seems like the terraform required_providers block is only required at the very top level. However, if it isn't working you can try adding it to each submodule as well.
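Concretely, a sketch of what main.tf inside module_a might look like with that proxy declaration added (the rest is unchanged from the question):
# main.tf (inside module_a) - sketch
# The empty aliased "proxy" block declares aws.some_profile inside this module,
# so the provider passed down from the root can be received and passed on.
provider "aws" {
  alias = "some_profile"
}

module "module_b" {
  source = "./module_b"
  providers = {
    aws = aws.some_profile
  }
}
On later Terraform versions (0.15 and up), the configuration_aliases argument inside required_providers replaces these empty proxy provider blocks.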
Hope this helps someone out.

Reference to other module resource in Terraform

I have a hierarchy like this in my Terraform Cloud git project:
├── aws
│   ├── flavors
│   │   └── main.tf
│   ├── main.tf
│   ├── security-rules
│   │   └── sec-rule1
│   │       └── main.tf
│   └── vms
│       └── vm1
│           └── main.tf
└── main.tf
All of the main.tf files contain module definitions pointing at their child folders:
/main.tf:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }

  required_providers {
    openstack = "~> 1.24.0"
  }
}

module "aws" {
  source = "./aws"
}
/aws/main.tf:
module "security-rules" {
  source = "./security-rules"
}

module "flavors" {
  source = "./flavors"
}

module "vms" {
  source = "./vms"
}

/aws/security-rules/main.tf:
module "sec-rule1" {
  source = "./sec-rule1"
}

/aws/vms/main.tf:
module "vm1" {
  source = "./vm1"
}
Then I have this security rule defined.
/aws/security-rules/sec-rule1/main.tf:
resource "openstack_compute_secgroup_v2" "sec-rule1" {
  name        = "sec-rule1"
  description = "Allow web port"

  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }

  lifecycle {
    prevent_destroy = false
  }
}
And I want to reference it from one or more VMs, but I don't know how to reference it by resource ID (or name), so I currently use plain names instead of references.
/aws/vms/vm1/main.tf:
resource "openstack_blockstorage_volume_v3" "vm1_volume" {
  name     = "vm1_volume"
  size     = 30
  image_id = "foo-bar"
}

resource "openstack_compute_instance_v2" "vm1_instance" {
  name        = "vm1_instance"
  flavor_name = "foo-bar"
  key_pair    = "foo-bar keypair"
  image_name  = "Ubuntu Server 18.04 LTS Bionic"

  block_device {
    uuid                  = "${openstack_blockstorage_volume_v3.vm1_volume.id}"
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    name = "SEG-tenant-net"
  }

  security_groups = ["default", "sec-rule1"]
  config_drive    = true
}

resource "openstack_networking_floatingip_v2" "vm1_fip" {
  pool = "foo-bar"
}

resource "openstack_compute_floatingip_associate_v2" "vm1_fip" {
  floating_ip = "${openstack_networking_floatingip_v2.vm1_fip.address}"
  instance_id = "${openstack_compute_instance_v2.vm1_instance.id}"
}
I want to reference security rules (and other resources) by name or ID because it would be more consistent. Besides, when I create a new security rule and a VM at the same time, the Terraform OpenStack provider plans it without error, but the apply fails because the VM is created first and it can't find the not-yet-created security rule.
How can I do this?
You should add an output sec_rule_allow_web_name to the sec-rule1 and security-rules/ modules, then pass the output of the security-rules/ module as an input to the vms and vm1 modules. This way you keep an explicit dependency of the vm1 module on the output of security_rules, which is a form of dependency inversion.
# ./security-rules/<example>/outputs.tf
output "sec_rule_allow_web_name" {
  value = "<some-resource-to-output>"
}

# ./vms/variables.tf
variable "security_rule_name" {}
Provided the outputs and inputs are defined in the correct modules.
# /aws/main.tf
# Best practice is to use underscores instead of dashes in names,
# so the security-rules/ directory is referenced as module "security_rules".
module "security_rules" {
  source = "./security-rules"
}

module "flavors" {
  source = "./flavors"
}

module "vms" {
  source             = "./vms"
  security_rule_name = module.security_rules.sec_rule_allow_web_name
}
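To make the wiring concrete, here is a sketch of how the output could be threaded through the intermediate modules down to vm1, using the resource and output names already shown above (the exact file layout is assumed):
# ./security-rules/sec-rule1/outputs.tf (sketch)
output "sec_rule_allow_web_name" {
  value = openstack_compute_secgroup_v2.sec-rule1.name
}

# ./security-rules/outputs.tf (sketch) - re-export from the wrapper module
output "sec_rule_allow_web_name" {
  value = module.sec-rule1.sec_rule_allow_web_name
}

# ./vms/main.tf (sketch) - pass the value on down to vm1
module "vm1" {
  source             = "./vm1"
  security_rule_name = var.security_rule_name
}

# ./vms/vm1/variables.tf (sketch)
variable "security_rule_name" {}

# In ./vms/vm1/main.tf the hard-coded name can then be replaced:
#   security_groups = ["default", var.security_rule_name]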

Terraform s3 backend not used by module

I've begun working with Terraform and have totally bought into it - amazing! Having created my entire Dev environment in terms of AWS VPC, subnets, NACLs, SGs, route tables etc., I decided that I had better turn this into reusable modules.
So I have turned it into modules, with variables etc. Now my dev template simply takes variables and uses them as inputs to the module. I end up with this:
terraform {
  backend "s3" {
    bucket         = "redacted"
    key            = "dev/vpc/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform_statelock_redacted"
  }
}

provider "aws" {
  access_key = ""
  secret_key = ""
  region     = "eu-west-1"
}

module "base_vpc" {
  source               = "git@github.com:closed/terraform-modules.git//vpc"
  vpc_cidr             = "${var.vpc_cidr}"
  vpc_region           = "${var.vpc_region}"
  Environment          = "${var.Environment}"
  Public-subnet-1a     = "${var.Public-subnet-1a}"
  Public-subnet-1b     = "${var.Public-subnet-1b}"
  Private-subnet-1a    = "${var.Private-subnet-1a}"
  Private-subnet-1b    = "${var.Private-subnet-1b}"
  Private-db-subnet-1a = "${var.Private-db-subnet-1a}"
  Private-db-subnet-1b = "${var.Private-db-subnet-1b}"
  Onsite-computers     = "${var.Onsite-computers}"
  browse_access        = "${var.browse_access}"
}
Now I have all state managed in an s3 backend, as you can see in the above configuration. I also have other state files for services/instances that are running. My problem is that now that I have turned this into a module and referenced it as above, it wants to blow away my state! I was under the impression that it would import the module and run it whilst respecting other configuration. The actual module code was copied from the original template, so nothing has changed there.
Is there a reason it is trying to blow everything away and start again? How does one manage separate states per environment in the case of using modules? I get no other errors. I have devs working on some of the servers at the moment so I'm paralysed now ha!
I guess I've misunderstood something, any help much appreciated :)
Thanks.
Edit - Using Terraform 0.9.8
OK so I think the bit I misunderstood was the way that using modules changes the paths in the state file. I realised I was on the right line while reading the Terraform docs around state migration.
I found this great blog entry to assist me with getting my head around it:
https://ryaneschinger.com/blog/terraform-state-move/
No comment section for me to thank that guy! Anyway, after seeing how easy it was, I just wrote the output of terraform state list to a text file for the main configuration, then used PowerShell to quickly iterate over it and write up the state move commands for the module I was moving the resources into. When I tried to execute the lines with this script I got the "Terraform has crashed!!!!" error, so I just cut and pasted them into my shell one by one. Proper nooby, but there were only 50 or so resources, so not that time consuming. SO glad I'm doing this at the Dev stage rather than deciding to do it retrospectively in prod.
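For anyone following along, the commands in question are terraform state mv calls that move each resource address under the new module path; a sketch with made-up resource addresses (module.base_vpc is the module name from the question):
$ terraform state list > state-list.txt
# the addresses below are examples only - take the real ones from state-list.txt
$ terraform state mv aws_vpc.main module.base_vpc.aws_vpc.main
$ terraform state mv aws_subnet.public_1a module.base_vpc.aws_subnet.public_1a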
So I'm sorted. Thanks for your input though JBirdVegas.
We ran into this issue and decided each environment needed a code representation we could view at the same time as the others if needed, i.e. compare dev configs to qa.
So now we have a folder for dev and one for qa and we launch terraform from there. Each is basically a list of variables that calls modules for each component.
Here is my tree for a visual representation
$ tree terraform/
terraform/
├── api_gateway
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── database
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── dev
│   └── main.tf
├── ec2
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── kms
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── network
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── qa
│   └── main.tf
└── sns
    ├── main.tf
    ├── output.tf
    └── variables.tf
dev/main.tf and qa/main.tf import the modules provided by the other folders, supplying environment-specific configuration for each module.
EDIT: here is a sanitized version of my dev/main.tf
provider "aws" {
region = "us-east-1"
profile = "blah-dev"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-dev-bucket"
key = "sweet/dev.terraform.tfstate"
region = "us-east-1"
profile = "blah-dev"
}
}
variable "aws_account" {
default = "000000000000"
}
variable "env" {
default = "dev"
}
variable "aws_region" {
default = "us-east-1"
}
variable "tag_product" {
default = "sweet"
}
variable "tag_business_region" {
default = "east"
}
variable "tag_business_unit" {
default = "my-department"
}
variable "tag_client" {
default = "some-client"
}
module build_env {
source = "../datasources"
}
module "kms" {
source = "../kms"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
}
module "network" {
source = "../network"
vpc_id = "vpc-000a0000"
subnet_external_1B = "subnet-000a0000"
subnet_external_1D = "subnet-000a0001"
subnet_db_1A = "subnet-000a0002"
subnet_db_1B = "subnet-000a0003"
}
module "database" {
source = "../database"
env = "dev"
vpc_id = "${module.network.vpc_id}"
subnet_external_1B = "${module.network.subnet_external_1B}"
subnet_external_1D = "${module.network.subnet_external_1D}"
subnet_db_1A = "${module.network.subnet_db_1A}"
subnet_db_1B = "${module.network.subnet_db_1B}"
database_instance_size = "db.t2.small"
database_name = "my-${var.tag_product}-db"
database_user_name = "${var.tag_product}"
database_passwd = "${module.kms.passwd_plaintext}"
database_identifier = "${var.tag_product}-rds-database"
database_max_connections = "150"
}
module sns {
source = "../sns"
aws_account = "${var.aws_account}"
}
module "api_gateway" {
source = "../api_gateway"
env = "${var.env}"
vpc_id = "${module.network.vpc_id}"
domain_name = "${var.tag_product}-dev.example.com"
dev_certificate_arn = "arn:aws:acm:${var.aws_region}:${var.aws_account}:certificate/abcd0000-a000-a000-a000-1234567890ab"
aws_account = "${var.aws_account}"
aws_region = "${var.aws_region}"
tag_client = "${var.tag_client}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
tag_business_region = "${var.tag_business_region}"
autoscaling_events_sns_topic_arn = "${module.sns.sns_topic_arn}"
db_subnet_id_1 = "${module.network.subnet_db_1A}"
db_subnet_id_2 = "${module.network.subnet_db_1B}"
ec2_role = "${var.tag_product}-assume-iam-role"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
module "ec2" {
source = "../ec2"
s3_bucket = "${var.tag_product}_dev_bucket"
aws_region = "${var.aws_region}"
env = "${var.env}"
ec2_key_name = "my-${var.tag_product}-key"
ec2_instance_type = "t2.micro"
aws_account = "${var.aws_account}"
vpc_id = "${module.network.vpc_id}"
binary_path = "${module.build_env.binary_path}"
binary_hash = "${module.build_env.binary_hash}"
git_hash_short = "${module.build_env.git_hash_short}"
private_key = "${format("%s/keys/%s-%s.pem", path.root, var.tag_product, var.env)}"
cloudfront_domain = "${module.api_gateway.cloudfront_domain}"
api_gateway_domain = "${module.api_gateway.api_gateway_cname}"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_product = "${var.tag_product}"
tag_business_unit = "${var.tag_business_unit}"
auto_scale_desired_capacity = "1"
auto_scale_max = "2"
auto_scale_min = "1"
autoscaling_events_sns_topic = "${module.sns.sns_topic_arn}"
subnet_external_b = "${module.network.subnet_external_b}"
subnet_external_a = "${module.network.subnet_external_a}"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
Then my QA is basically the same (modify a couple of vars at the top); the most important difference is at the very top of qa/main.tf, mostly these vars:
provider "aws" {
region = "us-east-1"
profile = "blah-qa"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-qa-bucket"
key = "sweet/qa.terraform.tfstate"
region = "us-east-1"
profile = "blah-qa"
}
}
variable "aws_account" {
default = "000000000001"
}
variable "env" {
default = "qa"
}
Using this, our backends for dev and qa have different state files in different buckets in different AWS accounts. I don't know what your requirements are, but this has satisfied most projects I've worked with; in fact, we are expanding our usage of this model across my org.
