I am creating AWS SQS queues using Terraform. For each service, I need to create two queues: one normal queue and one error queue. The settings for each are mostly the same, but I need to create the error queue first so I can pass its ARN to the normal queue as part of its redrive policy. Instead of creating 10 modules, there has to be a better way to loop through, replacing just the names. In programming logic: for each queue in queue_prefixes, create the error module, then the regular module. I'm sure I'm just not searching for or asking the right question.
sandbox/main.tf
provider "aws" {
region = "us-west-2"
}
module "hfd_sqs_error_sandbox" {
source = "../"
for_each = var.queue_prefixes
name= each.key+"_Error"
}
module "hfd_sqs_sandbox" {
source = "../"
name=hfd_sqs_error_sandbox.name
redrive_policy = jsonencode({
deadLetterTargetArn = hfd_sqs_error_sandbox_this_sqs_queue_arn,
maxReceiveCount = 3
})
}
variables.tf
variable "queue_prefixes" {
description = "Create these queues with the enviroment prefixed"
type = list(string)
default = [
"Clops",
"Document",
"Ledger",
"Log",
"Underwriting",
"Wallet",
]
}
You may want to consider adding a wrapper module that creates both the normal queue and the dead-letter queue. That would make creating the resources in the right order much easier.
Consider this example (with null resources for easy testing):
Root module creating all queues:
# ./main.tf
locals {
queue_prefixes = [
"Queue_Prefix_1",
"Queue_Prefix_2",
]
}
module "queue_set" {
  source   = "./modules/queue_set"
  for_each = toset(local.queue_prefixes)

  name = each.key
}
Wrapper module creating a set of 2 queues: normal + dlq:
# ./modules/queue_set/main.tf
variable "name" {
type = string
}
module "dlq" {
  source = "../queue"

  name = "${var.name}_Error"
}

module "queue" {
  source = "../queue"

  name           = var.name
  redrive_policy = module.dlq.id
}
Individual queue resource suitable to create both types of queues:
# ./modules/queue/main.tf
variable "name" {
type = string
}
variable "redrive_policy" {
type = string
default = ""
}
resource "null_resource" "queue" {
provisioner "local-exec" {
command = "echo \"Created queue ${var.name}, redrive policy: ${var.redrive_policy}\""
}
# this is irrelevant to the question, it's just to make null resource change every time
triggers = {
always_run = timestamp()
}
}
output "id" {
value = null_resource.queue.id
}
Now if we run this stack, the resources are created in the correct order: each DLQ is created before the queue that references it, because module.queue depends on module.dlq's output.
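For the real SQS version, the inner queue module would declare an aws_sqs_queue instead of a null_resource and expose its ARN, and the wrapper would build the redrive policy from it. A rough, untested sketch (the arn output name and the optional redrive_policy handling are assumptions; adapt them to whatever your existing ../ module exposes):
# ./modules/queue/main.tf (SQS variant)
variable "name" {
  type = string
}

variable "redrive_policy" {
  type    = string
  default = ""
}

resource "aws_sqs_queue" "queue" {
  name           = var.name
  redrive_policy = var.redrive_policy != "" ? var.redrive_policy : null
}

output "arn" {
  value = aws_sqs_queue.queue.arn
}

# ./modules/queue_set/main.tf (SQS variant)
module "dlq" {
  source = "../queue"

  name = "${var.name}_Error"
}

module "queue" {
  source = "../queue"

  name = var.name
  redrive_policy = jsonencode({
    deadLetterTargetArn = module.dlq.arn
    maxReceiveCount     = 3
  })
}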
I realised that Terraform modules recreate their resources per module declaration, and that a resource created in a module can only be referenced from outside it if it's defined as an output. I'm looking for a way to reuse a module without it recreating its resources each time it's declared.
Imagine a scenario where I have three terraform modules.
One is creating an IAM policy (AWS), second is creating an IAM role, third is creating a different IAM role, and both roles share the same IAM policy.
In code:
# policy
resource "aws_iam_policy" "secrets_manager_read_policy" {
name = "SecretsManagerRead"
description = "Read only access to secrets manager"
policy = {} # just to shorten demonstration
}
output "policy" {
value = aws_iam_policy.secrets_manager_read_policy
}
# test-role-1
resource "aws_iam_role" "test_role_1" {
name = "test-role-1"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_1.name
policy_arn = module.policy.policy.arn
}
# test-role-2
resource "aws_iam_role" "test_role_2" {
name = "test-role-2"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = module.policy.policy.arn
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
}
In this scenario Terraform tries to create two policies, one for each role, but I want both roles to use the same resource.
Is there a way to keep the code clean (so that not all resources live in the same file) while still declaring a resource once and using it from multiple modules? Or is it strictly a tree-like structure where sibling modules cannot share the same child? Yes, I could define the policy first and pass the needed properties down to the child modules where I create the roles, but what if I want a many-to-many relationship, where multiple roles share multiple policies?
I can think of a few ways to do this:
Option 1: Move the use of the policy module up to the parent level, and have your parent (root) Terraform code look like this:
# create-policy
module "my-policy" {
source = "../../../modules/resources/policy"
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
policy = module.my-policy.policy
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.my-policy.policy
}
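For Option 1 to work, each role module also needs to accept the policy as an input variable and use it for the attachment instead of instantiating the policy module itself. A minimal sketch of what test-role-1 could look like (the variable name policy simply matches what the root code above passes in):
# test-role-1 (receiving the policy from the parent)
variable "policy" {
  description = "The aws_iam_policy object created by the parent module"
}

resource "aws_iam_role" "test_role_1" {
  name = "test-role-1"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Effect    = "Allow"
        Sid       = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_1.name
  policy_arn = var.policy.arn
}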
Option 2: Output the policy from the role modules, and also make it an optional input variable of the modules:
variable "policy" {
default = null # Make the variable optional
}
module "policy" {
# Create the policy, only if one wasn't passed in
count = var.policy == null ? 1 : 0
source = "../test-policy"
}
locals {
# Create a variable with the value of either the passed-in policy,
# or the one we are creating
my-policy = var.policy == null ? module.policy[0].policy : var.policy
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = local.my-policy
}
output "policy" {
value = locals.my-policy
}
Then your root code could look like this:
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.role-1.policy
}
The first module wouldn't get an input, so it would create a new policy. The second module would get an input, so it would use it instead of re-creating the policy.
I also highly recommend looking at the source code for some of the official AWS Terraform modules, like this one. Reading the source code for those really helped me understand how to create reusable Terraform modules.
Currently I have a PowerShell script that reads a YAML config file describing all the objects I need created and generates a .tfvars file containing all the variables: maps, lists of maps, etc.
It would be something like the following:
global_tags = {
  Provisioner = "Terraform"
}
resource_groups = {
  myrg1 = {
    location = "uksouth",
    tags = {
      ResourceType = "resourcegroup"
    }
  }
}
storage_accounts = {
  mystorage1 = {
    resource_group_name      = "myrg1",
    location                 = "uksouth",
    account_tier             = "Standard",
    account_replication_type = "GRS",
    tags = {
      ResourceType = "storageaccount"
    }
    containers_list = [
      { name = "test_private_x", access_type = "private" },
      { name = "test_blob_x", access_type = "blob" },
      { name = "test_container_x", access_type = "container" }
    ]
  }
}
The idea is to then pump each list of maps into each module to create the resources, e.g. main.tf would be just:
module "resourcegroup" {
source = "./modules/azure-resourcegroup"
resource_groups = var.resource_groups
global_tags = var.global_tags
}
module "storageaccount" {
source = "./modules/azure-storageaccount"
depends_on = [module.resourcegroup]
storage_accounts = var.storage_accounts
global_tags = var.global_tags
}
Also, an example of a simple module would be:
resource "azurerm_resource_group" "rg" {
for_each = var.resource_groups
name = each.key
location = each.value.location
tags = lookup(each.value,"tags",null) == null ? var.global_tags : merge(var.global_tags,each.value.tags)
}
The issue is that writing a complex module, say around a storage account, isn't too bad if you are just feeding in all the params; but feeding in a list of maps and writing a module that reads that list and builds multiple flattened collections to drive, say, 15 different resource blocks (containers, shares, network rules, etc.) gets very complex.
Obviously the reason I want to use for_each loops in the modules is so that my main.tf doesn't have to call the module multiple times with hard-coded values for, say, 50 storage accounts.
Just wondering if I am missing an obvious way to create complicated multiples of each resource type?
I appreciate that I could write separate modules for containers, shares, etc. and break the complex maps down into simpler ones to pass to those additional modules, but I was trying to have just one storage account module that could handle anything and be fed a complex list of maps, so that main.tf never needs editing and I can control the config entirely via a .tfvars file.
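For what it's worth, the flattening step described in that question usually comes down to a flatten() over nested for expressions inside the module, producing one map per child resource type that for_each can consume. Below is a minimal, untested sketch for the containers case, assuming the storage_accounts / containers_list shape from the tfvars above (the local name and resource label are illustrative):
# inside the storage account module (illustrative sketch)
locals {
  # one entry per container, across all storage accounts
  containers = {
    for pair in flatten([
      for sa_name, sa in var.storage_accounts : [
        for c in try(sa.containers_list, []) : {
          key                  = "${sa_name}-${c.name}"
          storage_account_name = sa_name
          container_name       = c.name
          access_type          = c.access_type
        }
      ]
    ]) : pair.key => pair
  }
}

resource "azurerm_storage_container" "container" {
  for_each              = local.containers
  name                  = each.value.container_name
  storage_account_name  = each.value.storage_account_name
  container_access_type = each.value.access_type
}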
I am very new to Terraform and had a task dropped on me to create 2 AWS KMS keys.
So I am doing this:
resource "aws_kms_key" "ebs_encryption_key" {
description = "EBS encryption key"
... omitted for brevity ...
tags = merge(map(
"Name", format("%s-ebs-encryption-key", var.name_prefix),
"component", "kms",
"dataclassification","low",
), var.extra_tags)
}
resource "aws_kms_alias" "ebs_encryption_key" {
name = format("alias/%s-ebs-encryption-key", var.name_prefix)
target_key_id = aws_kms_key.ebs_encryption_key.key_id
}
# Repeated code!
resource "aws_kms_key" "rds_encryption_key" {
description = "RDS encryption key"
... omitted for brevity ...
tags = merge(map(
"Name", format("%s-rds-encryption-key", var.name_prefix),
"component", "kms",
"dataclassification","low",
), var.extra_tags)
}
resource "aws_kms_alias" "rds_encryption_key" {
name = format("alias/%s-rds-encryption-key", var.name_prefix)
target_key_id = "${aws_kms_key.rds_encryption_key.key_id}"
}
As you can see, the only difference between the two blocks of code is "ebs" vs "rds".
How could I use a for loop to avoid repeating the code blocks?
This seems like it could be a candidate for a small module that encapsulates the details of declaring a key and an associated alias, since a key and an alias are typically declared together in your system.
The module itself would look something like this:
variable "name" {
type = string
}
variable "description" {
type = string
}
variable "tags" {
type = map(string)
}
resource "aws_kms_key" "main" {
description = var.description
# ...
tags = var.tags
}
resource "aws_kms_alias" "main" {
name = "alias/${var.name}"
target_key_id = aws_kms_key.main.key_id
}
output "key_id" {
value = aws_kms_key.main.key_id
}
output "alias_name" {
value = aws_kms_alias.main.name
}
(As written here this module feels a little silly because there's not really much here that isn't derived only from the variables, but I'm assuming that the interesting stuff you want to avoid repeating is in "omitted for brevity" in your example, which would go in place of # ... in my example.)
Your calling module can then include a module block that uses for_each to create two instances of the module, systematically setting the arguments to populate its input variables:
module "kms_key" {
for_each = {
kms = "KMS"
ebs = "EBS"
}
name = "${var.name_prefix}-${each.key}-encryption-key"
description = "${each.value} Encryption Key"
tags = merge(
var.extra_tags,
{
Name = "${var.name_prefix}-${each.key}-encryption-key"
component = "kms"
dataclassification = "low"
},
)
}
Since the for_each map here has the keys kms and ebs, the result of this will be to declare resource instances which should have the following addresses in the plan:
module.kms_key["kms"].aws_kms_key.main
module.kms_key["kms"].aws_kms_alias.main
module.kms_key["ebs"].aws_kms_key.main
module.kms_key["ebs"].aws_kms_alias.main
Since they are identified by the map keys, you can add new keys to that map in future to create new key/alias pairs without disturbing the existing ones.
If you need to use the key IDs or alias names elsewhere in your calling module then you can access them via the outputs exposed in module.kms_key elsewhere in that calling module:
module.kms_key["kms"].key_id
module.kms_key["kms"].alias_name
module.kms_key["ebs"].key_id
module.kms_key["ebs"].alias_name
I am writing TF code to create multiple disks in GCP. The aim is to keep the code DRY and take a list as input.
My variable app_disks has the following definition:
variable "app_disks" {
type = list(object({
name = string
size = number
}))
}
And in my main.tf, I'm setting the variable like this:
app_disks = [
  {
    name = "loki"
    size = 200
  },
  {
    name = "repo"
    size = 100
  }
]
And in my module, my disk.tf looks like this
locals {
app_disk_map = {
for disk in var.app_disks : "${disk.name}" => disk
}
}
resource "google_compute_resource_policy" "app_disk_backup" {
for_each = local.app_disk_map
name = "${each.value.name}-backup"
snapshot_schedule_policy {
schedule {
hourly_schedule {
hours_in_cycle = 8
start_time = "04:00"
}
}
retention_policy {
max_retention_days = 14
on_source_disk_delete = "APPLY_RETENTION_POLICY"
}
}
}
resource "google_compute_disk" "app_disk" {
for_each = local.app_disk_map
provider = google-beta
name = each.value.name
zone = "${var.region}-a"
size = each.value.size
resource_policies = [each.google_compute_resource_policy.app_disk_backup[${each.value.name}-backup].self_link]
}
What I'm not sure about is how to link each disk's resource_policies to its corresponding google_compute_resource_policy.
I've tried combinations like
each.google_compute_resource_policy.app_disk_backup[${each.value.name}-backup].self_link
each.google_compute_resource_policy.app_disk_backup."${each.value.name}-backup".self_link
but none of them work.
I am not completely sure I understand the problem (as no error output was included), but from what I understood you want the following reference: google_compute_resource_policy.app_disk_backup[each.key].self_link, so the resource would look something like this:
resource "google_compute_disk" "app_disk" {
for_each = local.app_disk_map
....
resource_policies = [google_compute_resource_policy.app_disk_backup[each.key].self_link]
}
This will reference the same key that was used to create the corresponding policy, creating a 1:1 mapping between each disk and its resource policy.
I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization and you want to build out the same basic networks, or you want an easy way for a developer to spin up the stack they're working on. The only difference between "tf apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of applying the same Terraform configuration folder every time and simply passing in a variable when running Terraform (either via the command line or through environment variables). You'd also want a wrapper script that calls Terraform and configures your state settings so that each business unit's state is kept separate.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
Which creates an EC2 instance and an RDS instance. You would then call that with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
  -backend-config="bucket=${business_unit}" \
  -backend-config="key=state"
terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
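Note that terraform remote config / pull / push belong to very old Terraform versions; on current releases the same idea is expressed with a backend block plus partial configuration supplied at terraform init time. A rough sketch of the equivalent (the region value is just an example):
# backend.tf - bucket and key deliberately left out ("partial configuration")
terraform {
  backend "s3" {
    region = "us-west-2" # example region
  }
}

# The wrapper script would then run, per business unit:
#   terraform init -backend-config="bucket=${business_unit}" -backend-config="key=state"
#   terraform apply -var "BUSINESS_UNIT=${business_unit}"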
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage the state configuration as before, but going this route enables you to provide a rough template in your modules and then hard-code certain extra configuration per business unit, such as the instance size or the number of instances built for them.
This is a rather popular use case. To achieve this you can let developers pass a variable from the command line or from a tfvars file into the resources to make them unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We do this often to provision infrastructure for a specific git branch name, and since all resources are identifiable and live in separate tfstate files, we can safely destroy them when we no longer need them.