How to share terraform resources between modules? - terraform

I realised that Terraform creates a separate copy of a module's resources for every module declaration. Referencing a resource created inside a module is only possible through the module's outputs. I'm looking for a way to reuse a module without it recreating its resources each time.
Imagine a scenario where I have three terraform modules.
One creates an IAM policy (AWS), the second creates an IAM role, the third creates a different IAM role, and both roles share the same IAM policy.
In code:
# policy
resource "aws_iam_policy" "secrets_manager_read_policy" {
name = "SecretsManagerRead"
description = "Read only access to secrets manager"
policy = jsonencode({}) # placeholder policy document, shortened for demonstration
}
output "policy" {
value = aws_iam_policy.secrets_manager_read_policy
}
# test-role-1
resource "aws_iam_role" "test_role_1" {
name = "test-role-1"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_1.name
policy_arn = module.policy.policy.arn
}
# test-role-2
resource "aws_iam_role" "test_role_2" {
name = "test-role-2"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = module.policy.policy.arn
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
}
In this scenario Terraform tries to create two copies of the policy, one per role, but I want both roles to use the same resource.
Is there a way to keep the code clean, so that not all resources live in the same file, while still being able to reference one resource from multiple modules? Or is it strictly a tree-like structure where sibling modules cannot share the same child? Yes, I could define the policy first and pass the needed properties down to the child modules where I create the roles, but what if I want a many-to-many relationship between them, so that multiple roles share the same multiple policies?

I can think of a few ways to do this:
Option 1: Move the use of the policy module up to the parent level, and have your parent (root) Terraform code look like this:
# create-policy
module "my-policy" {
source = "../../../modules/resources/policy"
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
policy = module.my-policy.policy
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.my-policy.policy
}
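For Option 1 to work, each role module also has to declare the policy as an input variable and use it in its attachment, replacing the nested module "policy" block from the question. A minimal sketch of what test-role-1 might contain (the variable name policy matches the root code above; everything else is as in the question):
variable "policy" {
  description = "The aws_iam_policy object created at the root level"
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_1.name
  policy_arn = var.policy.arn
}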
Option 2: Output the policy from the role modules, and also make it an optional input variable of the modules:
variable "policy" {
default = null # Make the variable optional
}
module "policy" {
# Create the policy, only if one wasn't passed in
count = var.policy == null ? 1 : 0
source = "../test-policy"
}
locals {
# Create a variable with the value of either the passed-in policy,
# or the one we are creating
my-policy = var.policy == null ? module.policy[0].policy : var.policy
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = local.my-policy.arn
}
output "policy" {
value = local.my-policy
}
Then your root code could look like this:
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.role-1.policy
}
The first module wouldn't get an input, so it would create a new policy. The second module would get an input, so it would use it instead of re-creating the policy.
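If you later need the many-to-many relationship mentioned in the question (several roles sharing several policies), the same idea extends: create all the policies at the root, pass a set of policy ARNs into each role module, and attach them with for_each. A hedged sketch (the variable and resource names are illustrative, not from the original code):
variable "policy_arns" {
  type    = set(string)
  default = []
}

resource "aws_iam_role_policy_attachment" "attachments" {
  for_each   = var.policy_arns
  role       = aws_iam_role.test_role_1.name
  policy_arn = each.value
}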
I also highly recommend looking at the source code for some of the official AWS Terraform modules, like this one. Reading the source code for those really helped me understand how to create reusable Terraform modules.

Related

Terraform passing list/set from root module to child module issue

I have this root module which calls the child module to create a GCP project and create IAM role bindings.
module "test_project" {
source = "terraform.dev.mydomain.com/Dev/sbxprjmodule/google"
version = "1.0.3"
short_name = "looker-nwtest"
owner_bindings = ["group:npe-cloud-platformeng-contractors#c.mydomain.com", "group:npe-sbox-rw-tfetraining#c.mydomain.com"]
}
variable "owner_bindings" {
type = list(string)
default = null
}
This is the child module which does the assignments
resource "google_project_iam_binding" "g-sbox-iam-owner" {
count = var.owner_bindings == null ? 0 : length(var.owner_bindings)
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = [var.owner_bindings[count.index]]
}
variable "owner_bindings" {
type = list(string)
default = null
}
When I do a terraform plan and apply, it creates both the bindings properly, looping through twice. Then when I run a terraform plan again and apply, it shows this change below.
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[0] will be updated in-place
~ resource "google_project_iam_binding" "g-sbox-iam-owner" {
id = "g-prj-npe-sbox-looker-nwtest/roles/owner"
~ members = [
+ "group:npe-cloud-platformeng-contractors#c.mydomain.com",
- "group:npe-sbox-rw-tfetraining#c.mydomain.com",
]
# (3 unchanged attributes hidden)
}
Next time I do a terraform plan and apply, it shows the below. It then alternates between the two of the groups on each subsequent plan and apply.
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[1] will be updated in-place
~ resource "google_project_iam_binding" "g-sbox-iam-owner" {
id = "g-prj-npe-sbox-looker-nwtest/roles/owner"
~ members = [
- "group:npe-cloud-platformeng-contractors#c.relayhealth.com",
+ "group:npe-sbox-rw-tfetraining#c.relayhealth.com",
]
# (3 unchanged attributes hidden)
}
I tried changing the data structure from a list to a set and had the same issue.
The groups are not inherited and are applied only at the project level. So I'm not sure what I'm doing wrong here.
Instead of count you can use for_each; the change is simple.
The resource in your child module will look something like this:
resource "google_project_iam_binding" "g-sbox-iam-owner" {
for_each = var.owner_bindings == null ? toset([]) : toset(var.owner_bindings)
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = [each.value]
}
count becomes for_each, and in members we use each.value.
With for_each the resource addresses in state change: you will no longer see the numeric indexes:
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[0]
...
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[1]
Instead the keys will be the element names, something like:
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner["abc"]
...
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner["def"]
To loop or not to loop
After looking at this for a while, I'm questioning why we need individual iam_binding resources at all if they will all have the same role. Each google_project_iam_binding is authoritative for its role within the project, so two binding resources that both manage roles/owner will keep overwriting each other's members, which is what causes the alternating plans above. If all members share "roles/owner" we could just do:
resource "google_project_iam_binding" "g-sbox-iam-owner" {
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = var.owner_bindings
}

Adding Entities to Vault Namespaces, Groups, or Policies with Terraform

I'm having an issue with the Vault Terraform provider. I am able to create entities, namespaces, groups, and policies, but linking them together is not working for me. I can get the policy added to the group just fine, but I cannot add members to that group.
Here's what I have so far:
# module.users returns vault_identity_entity.entity.id
data "vault_identity_entity" "user_lookup" {
for_each = toset([for user in local.groups : user.name])
entity_name = each.key
depends_on = [
module.users
]
}
# module.devops_namespace returns vault_namespace.namespace.path
resource "vault_identity_group" "devops" {
depends_on = [
vault_policy.policy
]
name = "devops_users"
namespace = module.devops_namespace.vault_namespace
member_entity_ids = [for user in data.vault_identity_entity.user_lookup : jsondecode(user.data_json).id]
}
resource "vault_identity_group_policies" "default" {
policies = [vault_policy.gitlab_policy.name]
exclusive = false
group_id = vault_identity_group.devops.id
}
What I need to do is create a namespace and add users and a policy to that namespace.
Any help would be appreciated, thanks!
resource "vault_policy" "namespace" {
depends_on = [module.namespace]
name = "namespace"
policy = file("policies/namespace.hcl")
namespace = "devops"
}
resource "vault_identity_group" "devops" {
depends_on = [
module.users
]
name = "devops_users"
namespace = module.devops_namespace.vault_namespace
policies = [vault_policy.gitlab_policy.name]
member_entity_ids = [for user in module.users : user.entity_id]
}
By referring to the users the module created, I was able to achieve the correct result.
Since the module created the users from locals and the data resource was trying to pull down the same users, the extra data resource section wasn't needed.
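For reference, this relies on the users module exposing the entity it creates. Given the comment in the question that module.users returns vault_identity_entity.entity.id, a minimal sketch of that output inside the users module might look like this (the output name entity_id is assumed from the for expression above):
# inside the users module, one instance per user
output "entity_id" {
  value = vault_identity_entity.entity.id
}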
Thank you Marko E!

{ClientError}An error occurred (ValidationException) when calling the RunJobFlow operation: Invalid InstanceProfile

Using Terraform, I deployed an IAM role to be used with EMR:
data "aws_iam_policy_document" "emr_assume_role" {
statement {
sid = "EMRAssume"
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"elasticmapreduce.amazonaws.com"
]
}
}
}
resource "aws_iam_role" "my_emr_ec2_instance_role" {
name = "my_emr_ec2_instance_role"
assume_role_policy = data.aws_iam_policy_document.emr_assume_role.json
}
resource "aws_iam_policy" "emr_ec2_instances_policy" {
name = "emr_ec2_instances_policy"
policy = file("${path.module}/my/path/my_emr_instance_role_policy.json")
}
resource "aws_iam_role_policy_attachment" "policy_attachment" {
role = aws_iam_role.my_emr_ec2_instance_role.name
policy_arn = aws_iam_policy.emr_ec2_instances_policy.arn
}
Then when I try to run the run_job_flow() method from boto3 like this:
client.run_job_flow(
Name="EMR",
LogUri=logs_uri,
ReleaseLabel='emr-6.2.0',
Instances=instances,
VisibleToAllUsers=True,
Steps=steps,
BootstrapActions=ba,
Applications=[{'Name': 'Spark'}],
ServiceRole='my_service_role_emr',
JobFlowRole='my_emr_ec2_instance_role',
Tags=tags)
I straight away receive the following error message:
{ClientError}An error occurred (ValidationException) when calling the RunJobFlow operation: Invalid InstanceProfile my_emr_ec2_instance_role
How to resolve?
I'm sharing my experience hoping to help someone else; please share yours if it differs.
In my case the first mistake was in the identifiers field, which should have had "ec2.amazonaws.com" as its value, since this role is assumed by the cluster's EC2 instances rather than by the EMR service. So the aws_iam_policy_document block becomes:
data "aws_iam_policy_document" "emr_assume_role" {
statement {
sid = "EMRAssume"
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com"
]
}
}
}
Another issue relates to the Instance Profile, which would have been created automatically if the role had been created from the AWS Console; Terraform doesn't create it for you. So in Terraform this block of code should fix the problem:
resource "aws_iam_instance_profile" "emr_ec2_instance_profile" {
name = aws_iam_role.my_emr_ec2_instance_role.name
role = aws_iam_role.my_emr_ec2_instance_role.name
}
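Worth noting: the JobFlowRole parameter of run_job_flow() is effectively the name of the EC2 instance profile. Because the instance profile above reuses the role's name, the existing JobFlowRole='my_emr_ec2_instance_role' value resolves once the profile exists. If you want to make the dependency explicit, an output along these lines (illustrative, not part of the original code) can feed the value to whatever builds the boto3 call:
output "emr_instance_profile_name" {
  # name of the instance profile to pass as JobFlowRole
  value = aws_iam_instance_profile.emr_ec2_instance_profile.name
}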

What's the right way to create multiple elements using Terraform variables?

I am creating AWS SQS queues using Terraform. For each service, I need to create two queues: one normal queue and one error queue. The settings for each are mostly the same, but I need to create the error queue first so I can pass its ARN to the normal queue as part of its redrive policy. Instead of creating 10 modules, there has to be a better way to loop through, replacing just the names. In programming logic: for each queue in queue_prefixes, create the error module, then the regular module. I'm sure I'm just not searching right or asking the right question.
sandbox/main.tf
provider "aws" {
region = "us-west-2"
}
module "hfd_sqs_error_sandbox" {
source = "../"
for_each = var.queue_prefixes
name= each.key+"_Error"
}
module "hfd_sqs_sandbox" {
source = "../"
name=hfd_sqs_error_sandbox.name
redrive_policy = jsonencode({
deadLetterTargetArn = hfd_sqs_error_sandbox_this_sqs_queue_arn,
maxReceiveCount = 3
})
}
variables.tf
variable "queue_prefixes" {
description = "Create these queues with the enviroment prefixed"
type = list(string)
default = [
"Clops",
"Document",
"Ledger",
"Log",
"Underwriting",
"Wallet",
]
}
You may want to consider adding a wrapper module that creates both the normal queue and the dead-letter queue. That would make creating the resources in the right order much easier.
Consider this example (with null resources for easy testing):
Root module creating all queues:
# ./main.tf
locals {
queue_prefixes = [
"Queue_Prefix_1",
"Queue_Prefix_2",
]
}
module "queue_set" {
source = "./modules/queue_set"
for_each = toset(local.queue_prefixes)
name = each.key
}
Wrapper module creating a set of 2 queues: normal + dlq:
# ./modules/queue_set/main.tf
variable "name" {
type = string
}
module "dlq" {
source = "../queue"
name = "${var.name}_Error"
}
module "queue" {
source = "../queue"
name = var.name
redrive_policy = module.dlq.id
}
An individual queue module suitable for creating both types of queues:
# ./modules/queue/main.tf
variable "name" {
type = string
}
variable "redrive_policy" {
type = string
default = ""
}
resource "null_resource" "queue" {
provisioner "local-exec" {
command = "echo \"Created queue ${var.name}, redrive policy: ${var.redrive_policy}\""
}
# this is irrelevant to the question, it's just to make null resource change every time
triggers = {
always_run = timestamp()
}
}
output "id" {
value = null_resource.queue.id
}
Now if we run this stack, we can see the resources created in the correct order.
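Since the question is specifically about SQS, the individual queue module could use a real aws_sqs_queue instead of the null resource. A possible sketch (the attribute values and redrive-policy wiring are illustrative, not from the original answer):
# ./modules/queue/main.tf (SQS variant)
variable "name" {
  type = string
}

variable "redrive_policy" {
  type    = string
  default = ""
}

resource "aws_sqs_queue" "queue" {
  name           = var.name
  # only set a redrive policy when one was passed in
  redrive_policy = var.redrive_policy != "" ? var.redrive_policy : null
}

output "arn" {
  value = aws_sqs_queue.queue.arn
}

# In ./modules/queue_set/main.tf the redrive policy would then be built from
# the DLQ's ARN, e.g.:
#   redrive_policy = jsonencode({
#     deadLetterTargetArn = module.dlq.arn
#     maxReceiveCount     = 3
#   })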

terraform - Iterate over two linked resources

I’m trying to write some code which would take an input structure like this:
projects = {
"project1" = {
namespaces = ["mynamespace1"]
},
"project2" = {
namespaces = ["mynamespace2", "mynamespace3"]
}
}
and provision multiple resources with for_each which would result in this:
resource "rancher2_project" "project1" {
provider = rancher2.admin
cluster_id = module.k8s_cluster.cluster_id
wait_for_cluster = true
}
resource "rancher2_project" "project2" {
provider = rancher2.admin
cluster_id = module.k8s_cluster.cluster_id
wait_for_cluster = true
}
resource "rancher2_namespace" "mynamespace1" {
provider = rancher2.admin
project_id = rancher2_project.project1.id
depends_on = [rancher2_project.project1]
}
resource "rancher2_namespace" "mynamespace2" {
provider = rancher2.admin
project_id = rancher2_project.project2.id
depends_on = [rancher2_project.project2]
}
resource "rancher2_namespace" "mynamespace3" {
provider = rancher2.admin
project_id = rancher2_project.project2.id
depends_on = [rancher2_project.project2]
}
Namespaces are dependent on projects, and the generated project ID needs to be passed into each namespace.
Is there any good way of doing this dynamically? We might have a lot of projects/namespaces.
Thanks for any help and advice.
The typical answer for systematically generating multiple instances of a resource based on a data structure is resource for_each. The main requirement for resource for_each is to have a map which contains one element per resource instance you want to create.
In your case it seems like you need one rancher2_project per project and then one rancher2_namespace for each pair of project and namespaces. Your current data structure is therefore already sufficient for the rancher2_project resource:
resource "rancher2_project" "example" {
for_each = var.projects
provider = rancher2.admin
cluster_id = module.k8s_cluster.cluster_id
wait_for_cluster = true
}
The above will declare two resource instances with the following addresses:
rancher2_project.example["project1"]
rancher2_project.example["project2"]
You don't currently have a map that has one element per namespace, so it will take some more work to derive a suitable value from your input data structure. A common pattern for this situation is flattening nested structures for for_each using the flatten function:
locals {
project_namespaces = flatten([
for pk, proj in var.projects : [
for nsk in proj.namespaces : {
project_key = pk
namespace_key = nsk
project_id = rancher2_project.example[pk].id
}
]
])
}
resource "rancher2_namespace" "example" {
for_each = {
for obj in local.project_namespaces :
"${obj.project_key}.${obj.namespace_key}" => obj
}
provider = rancher2.admin
name = each.value.namespace_key
project_id = each.value.project_id
}
This produces a list of objects representing all of the project and namespace pairs, and then the for_each argument transforms it into a map using compound keys that include both the project and namespace keys to ensure that they will all be unique. The resulting instances will therefore have the following addresses:
rancher2_namespace.example["project1.mynamespace1"]
rancher2_namespace.example["project2.mynamespace2"]
rancher2_namespace.example["project2.mynamespace3"]
This seems to work too:
resource "rancher2_namespace" "example" {
count = length(local.project_namespaces)
provider = rancher2.admin
name = local.project_namespaces[count.index].namespace_key
project_id = local.project_namespaces[count.index].project_id
}
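One caveat with the count-based variant: the instances are tracked by numeric index, so adding or removing a project or namespace in the middle of the input shifts the indexes and can make Terraform plan to destroy and recreate unrelated namespaces. The for_each version with compound string keys avoids that, which is why it is generally the safer choice here.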
