I'm getting the following error running Terraform apply when trying to reference an output from another Module:
│ Error: Unsupported attribute
│
│ on ../app-db-modules/rds.tf line 14, in resource "aws_db_subnet_group" "ccDBSubnetGroup":
│ 14: subnet_ids = ["${data.terraform_remote_state.remote.outputs.ccPrivateSubnetId}"]
│ ├────────────────
│ │ data.terraform_remote_state.remote.outputs is object with no attributes
│
│ This object does not have an attribute named "ccPrivateSubnetId".
╵
Tree
.
├── app
│ ├── main.tf
│ ├── terraform.tfstate
│ └── terraform.tfstate.backup
├── app-db-modules
│ ├── main.tf
│ ├── rds.tf
│ └── variables.tf
├── app-network-modules
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
│ └── vpc.tf
└── app-tf-state-infra-modules
├── main.tf
├── tf-state-infra.tf
└── variables.tf
main.tf (app dir)
terraform {
backend "s3" {
bucket = "MY_BUCKET_NAME"
key = "tf-infra/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-state-locking"
encrypt = true
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~>3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
...
module "vpc-infra" {
source = "../app-network-modules"
# VPC Input Vars
vpc_cidr = "10.0.0.0/16"
public_subnet_cidr = "10.0.0.0/24"
private_subnet_cidr = "10.0.1.0/24"
}
module "rds-infra" {
source = "../app-db-modules"
# RDS Input Vars
db_az = "us-east-1a"
db_name = "ccDatabaseInstance"
db_user_name = var.db_user_name
db_user_password = var.db_user_password
}
vpc.tf (app-network-modules)
...
resource "aws_subnet" "ccPrivateSubnet" {
vpc_id = aws_vpc.ccVPC.id
cidr_block = var.private_subnet_cidr
}
...
outputs.tf (app-network-modules)
output "ccPrivateSubnetId" {
description = "Will be used by rds Module to set subnet_ids"
value = aws_subnet.ccPrivateSubnet.id
}
The following reference, data.terraform_remote_state.remote.outputs.ccPrivateSubnetId, is causing the error:
rds.tf (app-db-modules)
data "terraform_remote_state" "remote" {
backend = "s3"
config = {
bucket = "MY_BUCKET_NAME"
key = "tf-infra/terraform.tfstate"
region = "us-east-1"
}
}
resource "aws_db_subnet_group" "ccDBSubnetGroup" {
subnet_ids = ["${data.terraform_remote_state.remote.outputs.ccPrivateSubnetId}"]
}
resource "aws_db_instance" "ccDatabaseInstance" {
db_subnet_group_name = "ccDBSubnetGroup"
availability_zone = var.db_az
allocated_storage = 20
storage_type = "standard"
engine = "postgres"
engine_version = "12.5"
instance_class = "db.t2.micro"
name = var.db_name
username = var.db_user_name
password = var.db_user_password
skip_final_snapshot = true
}
output "all_outputs" {
value = data.terraform_remote_state.remote.outputs
}
Any thoughts on why data.terraform_remote_state.remote.outputs is an object with no attributes, and/or why I'm unable to reference ccPrivateSubnetId in rds.tf even though it is provided as an output from another Module (vpc.tf), would be appreciated!
EDIT: adding the solution, based on the comments provided below.
main.tf
...
module "vpc-infra" {
source = "../app-network-modules"
# VPC Input Vars
vpc_cidr = "10.0.0.0/16"
public_subnet_cidr = "10.0.0.0/24"
private_subnet_cidr = "10.0.1.0/24"
}
module "rds-infra" {
source = "../app-db-modules"
# RDS Input Vars
ccPrivateSubnetId = "${module.vpc-infra.ccPrivateSubnetId}"
db_az = "us-east-1a"
db_name = "ccDatabaseInstance"
db_user_name = var.db_user_name
db_user_password = var.db_user_password
}
...
outputs.tf
output "ccPrivateSubnetId" {
description = "Will be used by RDS Module to set subnet_ids"
value = "${aws_subnet.ccPrivateSubnet.id}"
}
vpc.tf
...
resource "aws_subnet" "ccPrivateSubnet" {
vpc_id = aws_vpc.ccVPC.id
cidr_block = var.private_subnet_cidr
}
...
rds.tf
resource "aws_db_subnet_group" "ccDBSubnetGroup" {
subnet_ids = ["${var.ccPrivateSubnetId}"]
}
...
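For completeness, the DB module also has to declare the new input. Its variables.tf isn't shown above, so this is just a minimal sketch of what it needs:
# app-db-modules/variables.tf (sketch)
variable "ccPrivateSubnetId" {
  description = "Private subnet id passed in from the vpc-infra module"
  type        = string
}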
Using data.terraform_remote_state in general is bad practice. I would call it a very advanced feature that should only be used in extreme edge cases. If you are referencing the same state that the current Terraform template is using, then it is an absolute anti-pattern.
Instead, make the values you need to reference part of the outputs of the network module, then pass those values as inputs to the DB module.
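As for why outputs came back as an object with no attributes: data.terraform_remote_state only exposes the root module outputs of the referenced state, and the app root configuration (as shown) declares no output blocks; ccPrivateSubnetId is only an output of the app-network-modules child module. On top of that, the data source points at the same bucket and key that this configuration's own backend writes to. If remote state genuinely were needed, the root module would have to re-export the value, something like:
# app/main.tf (sketch) - re-export the child module's output at the root level
output "ccPrivateSubnetId" {
  value = module.vpc-infra.ccPrivateSubnetId
}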
Related
I have the following file structure:
.
├── terragrunt.hcl
└── root.hcl /
    └── env.hcl /
        └── workload.hcl/
            └── terragrunt.hcl
Within each config (.hcl file), there are tags such as:
root
locals {
tags = {
root= "test"
}
}
env
locals {
tags = {
env = "test"
}
}
workload
locals {
tags = {
workload = "test"
}
}
These tags are merged in the child terragrunt.hcl using the following.
locals {
root_vars = (read_terragrunt_config(find_in_parent_folders("root.hcl"))).locals
env_vars = (read_terragrunt_config(find_in_parent_folders("env.hcl"))).locals
workload_vars = (read_terragrunt_config(find_in_parent_folders("workload.hcl"))).locals
merged_tags = merge(local.root_vars.tags, local.env_vars.tags, local.workload_vars.tags)
}
inputs = {
tags = local.merged_tags
}
The subsequent terragrunt plans all fail with the following error.
│ Error: Incorrect attribute value type
│
│ on main.tf line 46, in resource "azurerm_windows_virtual_machine" "vm":
│ 46: tags = var.tags
│ ├────────────────
│ │ var.tags is "{\"root\":\"test\",\"env\":\"test\",\"workload\":\"test\"}"
│
│ Inappropriate value for attribute "tags": map of string required.
Anyone know what I am doing wrong here?
Expected behaviour: terragrunt plan adds the tags to the resources.
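The error itself shows what is happening: var.tags reaches Terraform as the JSON string "{\"root\":\"test\",...}" rather than as a map. Terragrunt hands inputs to Terraform as TF_VAR_* environment variables, and Terraform only parses an environment-variable value as a map/object when the variable is declared with a complex type. Assuming the module currently declares tags without a type (the declaration isn't shown above), giving it an explicit map type should let the merged tags come through as a real map:
# in the module's variables.tf (sketch; the real declaration isn't shown)
variable "tags" {
  type    = map(string)
  default = {}
}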
I have the below directory structure
├── main.tf
├── output.tf
├── variables.tf
├── modules
│ ├── ServicePrincipal
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ ├── main.tf
│ ├── aks
│ │ ├── main.tf
│ │ ├── output.tf
│ │ └── variables.tf
...
Issue:
I want to use the client_id and client_secret generated by the ServicePrincipal module as inputs to create my AKS cluster. I am able to reference the output variables below from my root main.tf as module.modulename.outputvarname; however, I cannot access them in another child module (aks) as var.client_id or module.serviceprincipal.client_id.
main.tf of root module where I am able to use client_id and client_secret
module "ServicePrincipal" {
source = "./modules/ServicePrincipal"
service_principal_name = var.service_principal_name
redirect_uris = var.redirect_uris
}
module "aks" {
source = "./modules/aks/"
service_principal_name = var.service_principal_name
serviceprinciple_id = module.ServicePrincipal.service_principal_object_id
serviceprinciple_key = module.ServicePrincipal.client_secret
location = var.location
resource_group_name = var.rgname
depends_on = [
module.ServicePrincipal
]
}
main.tf of aks module
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
output.tf for my ServicePrincipal module
output "client_id" {
description = "The application id of AzureAD application created."
value = azuread_application.main.application_id
}
output "client_secret" {
description = "Password for service principal."
value = azuread_service_principal_password.main.*.value
}
Below is the error I am getting:
Error: Missing required argument
on main.tf line 136, in module "aks":
136: module "aks" {
The argument "client_id" is required, but no definition was found.
Error: Missing required argument
on main.tf line 136, in module "aks":
136: module "aks" {
The argument "client_secret" is required, but no definition was found.
I have already defined those as variables in the aks module and the root module; am I missing something here?
Thanks in advance!
Piyush
Child modules can't reference each other's outputs. You have to explicitly pass them from one module to the other in the root module, e.g.
in root:
module "ServicePrincipal" {
}
module "aks" {
client_id = module.ServicePrincipal.client_id
}
You're using client_id and client_secret as the output names, but in the module call you're passing them under different names:
module.ServicePrincipal.service_principal_object_id
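In other words, the root module's "aks" block never sets client_id or client_secret at all, so the aks module's required variables go unfilled. A rough sketch of the corrected call, assuming the aks module really does declare client_id and client_secret as variables (as the question says):
module "aks" {
  source        = "./modules/aks/"
  client_id     = module.ServicePrincipal.client_id
  # the client_secret output uses a splat (azuread_service_principal_password.main.*.value),
  # so it arrives as a list; index it if the aks variable expects a single string
  client_secret = module.ServicePrincipal.client_secret[0]
  # ...other inputs (location, resource_group_name, etc.) as before
}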
Still fairly new to programming, and diving into the deep end of Terraform. I am working through learning ACI with Terraform and getting an error. I think I am just declaring my objects wrong, but can't figure it out. I have tried to leave out a lot of redundant config and errors for brevity.
│ Error: Invalid index
│
│ on ../modules/tenant/contracts.tf line 38, in resource "aci_epg_to_contract" "terraform_epgweb_contract":
│ 38: application_epg_dn = aci_application_epg.epga[each.key].id
│ ├────────────────
│ │ aci_application_epg.epga is object with 2 attributes
│ │ each.key is "terraform_three"
│
│ The given key does not identify an element in this collection value.
Problem:
Assign 3 contracts to each EPG (2 EPGs in each App profile).
For App A that I created, I have epga, which consists of a web EPG and a DB EPG.
For App B that I created, I have epgb, which consists of a web EPG and a DB EPG.
module "my_tenant" {
source = "../modules/tenant"
epga = {
web_epg = {
name = "web_epg"
application_profile = "app_a_ap"
bridge_domain = "app_a_bd"
},
db_epg = {
name = "db_epg"
application_profile = "app_a_ap"
bridge_domain = "app_a_bd"
}
}
For the contracts I am using two different EPG contract maps: epga_contracts and epgb_contracts.
epga_contracts = {
terraform_one = {
epg = "web_epg",
contract = "contract_sql",
contract_type = "consumer"
},
terraform_two = {
epg = "db_epg",
contract = "contract_sql",
contract_type = "provider"
},
terraform_three = {
epg = "web_epg",
contract = "contract_web",
contract_type = "provider"
}
}
To assign them, I am trying to loop through both of the web contracts and both of the db contracts that are already created and attach them to the EPGs.
resource "aci_epg_to_contract" "terraform_epgweb_contract" {
for_each = var.epga_contracts
application_epg_dn = aci_application_epg.epga[each.key].id
contract_dn = aci_contract.terraform_contract[each.value.contract].id
contract_type = each.value.contract_type
}
resource "aci_epg_to_contract" "terraform_epgdb_contract" {
for_each = var.epga_contracts
application_epg_dn = aci_application_epg.epgb[each.key].id
contract_dn = aci_contract.terraform_contract[each.value.contract].id
contract_type = each.value.contract_type
}
Lastly, here is the map variable for the contracts.
variable "epgb_contracts" {
type = map(object({
epg = string,
contract = string,
contract_type = string,
}))
description = "Map of filters to create and their associated subjects"
default = {}
}
Definition of epga
resource "aci_application_epg" "epga" {
for_each = local.epga
name = each.value.name
application_profile_dn = aci_application_profile.app_prof[each.value.application_profile].id
relation_fv_rs_bd = aci_bridge_domain.bd[each.value.bridge_domain].id
depends_on = [aci_bridge_domain.bd, aci_application_profile.app_prof]
}
Here is the directory tree:
/Users/jasonholt/terraform/mod_examples
├── aci_tenant
│ ├── credentials.tf
│ ├── main.tf
│ ├── terraform.tfstate
│ ├── terraform.tfstate.backup
│ └── variables.old
└── modules
└── tenant
├── app_profile.tf
├── bd.tf
├── contracts.tf
├── epg.tf
├── locals.tf
├── tenant.tf
├── variables.tf
└── vrf.tf
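A likely cause, going by the maps above: aci_application_epg.epga is built with for_each over the EPG map, so its keys are web_epg and db_epg, but the contract resources index it with each.key, which at that point is the contract map's key (terraform_one, terraform_two, terraform_three - hence the error for "terraform_three"). Indexing by the epg attribute of each contract object lines the two maps up; a sketch of the first resource:
resource "aci_epg_to_contract" "terraform_epgweb_contract" {
  for_each           = var.epga_contracts
  # look the EPG up by the contract's epg field, not by the contract map key
  application_epg_dn = aci_application_epg.epga[each.value.epg].id
  contract_dn        = aci_contract.terraform_contract[each.value.contract].id
  contract_type      = each.value.contract_type
}
The terraform_epgdb_contract resource needs the same change against epgb; note that it also currently loops over var.epga_contracts while indexing epgb, which looks like a copy/paste slip.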
I have a hierarchy like this in my Terraform Cloud git project:
├── aws
│ ├── flavors
│ │ └── main.tf
│ ├── main.tf
│ ├── security-rules
│ │ └── sec-rule1
│ │ └── main.tf
│ └── vms
│ │ └── vm1
│ │ └── main.tf
└── main.tf
All the main.tf files contain module definitions pointing at their child folders:
/main.tf:
terraform {
required_version = "~> 0.12.0"
backend "remote" {
hostname = "app.terraform.io"
organization = "foo"
workspaces {
name = "bar"
}
}
required_providers {
openstack = "~> 1.24.0"
}
}
module "aws" {
source = "./aws"
}
/aws/main.tf:
module "security-rules" {
source = "./security-rules"
}
module "flavors" {
source = "./flavors"
}
module "vms" {
source = "./vms"
}
/aws/security-rules/main.tf:
module "sec-rule1" {
source = "./sec-rule1"
}
/aws/vms/main.tf:
module "vm1" {
source = "./vm1"
}
Then I have this security rule defined.
/aws/security-rules/sec-rule1/main.tf:
resource "openstack_compute_secgroup_v2" "sec-rule1" {
name = "sec-rule1"
description = "Allow web port"
rule {
from_port = 80
to_port = 80
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
lifecycle {
prevent_destroy = false
}
}
And I want to reference it from one or more VMs, but I don't know how to reference it by resource ID (or name). Right now I use plain names instead of references.
/aws/vms/vm1/main.tf:
resource "openstack_blockstorage_volume_v3" "vm1_volume" {
name = "vm1_volume"
size = 30
image_id = "foo-bar"
}
resource "openstack_compute_instance_v2" "vm1_instance" {
name = "vm1_instance"
flavor_name = "foo-bar"
key_pair = "foo-bar keypair"
image_name = "Ubuntu Server 18.04 LTS Bionic"
block_device {
uuid = "${openstack_blockstorage_volume_v3.vm1_volume.id}"
source_type = "volume"
destination_type = "volume"
boot_index = 0
delete_on_termination = false
}
network {
name = "SEG-tenant-net"
}
security_groups = ["default", "sec-rule1"]
config_drive = true
}
resource "openstack_networking_floatingip_v2" "vm1_fip" {
pool = "foo-bar"
}
resource "openstack_compute_floatingip_associate_v2" "vm1_fip" {
floating_ip = "${openstack_networking_floatingip_v2.vm1_fip.address}"
instance_id = "${openstack_compute_instance_v2.vm1_instance.id}"
}
I want to reference security-rules (and more stuff) by name or ID, because it would be more consistent. Besides, when I create a new security rule and a VM at the same time, the Terraform OpenStack provider plans it without error, but the apply fails because the VM is created first and the not-yet-created security rule can't be found.
How can I do this?
You should add a sec_rule_allow_web_name output to both the sec-rule1 and security-rules/ modules, then feed the security-rules/ module's output in as an input to the vm1 and vms modules. This way the vm1 module keeps an explicit dependency on the output of security_rules, which is essentially dependency inversion.
# ./security-rules/<example>/outputs.tf
output "sec_rule_allow_web_name" {
value = "<some-resource-to-output>"
}
# ./vms/variables.tf
variable "security_rule_name" {}
Provided the outputs and inputs are defined in the correct modules, the wiring looks like this:
# /aws/main.tf
# best practice is to use underscores instead of dashes in names,
# so the security-rules module is referenced here as security_rules
module "security_rules" {
source = "./security-rules"
}
module "flavors" {
source = "./flavors"
}
module "vms" {
source = "./vms"
security_rule_name = module.security_rules.sec_rule_allow_web_name
}
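On the consuming side, vms/ then passes the value down to vm1, which can use it in place of the hard-coded name; a rough sketch reusing the names above:
# ./vms/main.tf - pass the name through to the vm1 module
module "vm1" {
  source             = "./vm1"
  security_rule_name = var.security_rule_name
}
# ./vms/vm1/variables.tf
variable "security_rule_name" {}
Inside vm1's openstack_compute_instance_v2, the hard-coded list then becomes security_groups = ["default", var.security_rule_name], which also gives Terraform the ordering dependency that was missing when the name was a plain string, so the security group is created before the VM.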
I've begun working with Terraform and have totally bought into it - amazing! Having created my entire Dev environment (AWS VPC, subnets, NACLs, SGs, route tables, etc.), I decided that I had better turn it into reusable modules.
So now I have turned it into modules, with variables etc. Now my dev template simply takes variables and uses them as inputs to the module. I end up with this:
terraform {
backend "s3" {
bucket = "redacted"
key = "dev/vpc/terraform.tfstate"
region = "eu-west-1"
encrypt = true
dynamodb_table = "terraform_statelock_redacted"
}
}
provider "aws"{
access_key = ""
secret_key = ""
region = "eu-west-1"
}
module "base_vpc" {
source = "git#github.com:closed/terraform-modules.git//vpc"
vpc_cidr = "${var.vpc_cidr}"
vpc_region = "${var.vpc_region}"
Environment = "${var.Environment}"
Public-subnet-1a = "${var.Public-subnet-1a}"
Public-subnet-1b = "${var.Public-subnet-1b}"
Private-subnet-1a = "${var.Private-subnet-1a}"
Private-subnet-1b = "${var.Private-subnet-1b}"
Private-db-subnet-1a = "${var.Private-db-subnet-1a}"
Private-db-subnet-1b = "${var.Private-db-subnet-1b}"
Onsite-computers = "${var.Onsite-computers}"
browse_access = "${var.browse_access}"
}
Now I have all state managed in an s3 backend, as you can see in the above configuration. I also have other state files for services/instances that are running. My problem is that now that I have turned this into a module and referenced it as above, it wants to blow away my state! I was under the impression that it would import the module and run it whilst respecting other configuration. The actual module code was copied from the original template, so nothing has changed there.
Is there a reason it is trying to blow everything away and start again? How does one manage separate states per environment in the case of using modules? I get no other errors. I have devs working on some of the servers at the moment so I'm paralysed now ha!
I guess I've misunderstood something, any help much appreciated :)
Thanks.
Edit - Using Terraform 0.9.8
OK so I think the bit I misunderstood was the way that using modules changes the paths in the state file. I realised I was on the right line while reading the Terraform docs around state migration.
I found this great blog entry to assist me with getting my head around it:
https://ryaneschinger.com/blog/terraform-state-move/
No comment section for me to thank that guy! Anyway, after seeing how easy it was, I just dumped the output of terraform state list to a text file for the main configuration, then used PowerShell to quickly iterate over the entries and write up the state move commands for the module I was moving them into. When I tried to execute the lines with this script I got the "Terraform has crashed!!!!" error, so I just cut and pasted them into my shell one by one. Proper nooby, but there were only 50 or so resources, so not that time consuming. SO glad I'm doing this at the Dev stage rather than deciding to do it retrospectively in prod.
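For anyone doing the same thing, the commands boil down to terraform state mv, re-homing each existing address under the new module path. The resource addresses below are made up for illustration; base_vpc is the module name from the configuration above:
terraform state list > resources.txt
# for each address in resources.txt, prefix it with the module path, e.g.
terraform state mv aws_vpc.main module.base_vpc.aws_vpc.main
terraform state mv aws_subnet.public-1a module.base_vpc.aws_subnet.public-1a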
So I'm sorted. Thanks for your input though JBirdVegas.
We ran into this issue and decided each environment needed a code representation we could view alongside the others if needed, i.e. to compare dev configs to QA's.
So now we have a folder for dev and one for qa and we launch terraform from there. Each is basically a list of variables that calls modules for each component.
Here is my tree for a visual representation
$ tree terraform/
terraform/
├── api_gateway
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── database
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── dev
│ └── main.tf
├── ec2
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── kms
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── network
│ ├── main.tf
│ ├── output.tf
│ └── variables.tf
├── qa
│ └── main.tf
└── sns
├── output.tf
├── main.tf
└── variables.tf
dev/main.tf and qa/main.tf import the modules provided by the other folders, supplying environment-specific configuration for each module.
EDIT: here is a sanitized version of my dev/main.tf
provider "aws" {
region = "us-east-1"
profile = "blah-dev"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-dev-bucket"
key = "sweet/dev.terraform.tfstate"
region = "us-east-1"
profile = "blah-dev"
}
}
variable "aws_account" {
default = "000000000000"
}
variable "env" {
default = "dev"
}
variable "aws_region" {
default = "us-east-1"
}
variable "tag_product" {
default = "sweet"
}
variable "tag_business_region" {
default = "east"
}
variable "tag_business_unit" {
default = "my-department"
}
variable "tag_client" {
default = "some-client"
}
module build_env {
source = "../datasources"
}
module "kms" {
source = "../kms"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
}
module "network" {
source = "../network"
vpc_id = "vpc-000a0000"
subnet_external_1B = "subnet-000a0000"
subnet_external_1D = "subnet-000a0001"
subnet_db_1A = "subnet-000a0002"
subnet_db_1B = "subnet-000a0003"
}
module "database" {
source = "../database"
env = "dev"
vpc_id = "${module.network.vpc_id}"
subnet_external_1B = "${module.network.subnet_external_1B}"
subnet_external_1D = "${module.network.subnet_external_1D}"
subnet_db_1A = "${module.network.subnet_db_1A}"
subnet_db_1B = "${module.network.subnet_db_1B}"
database_instance_size = "db.t2.small"
database_name = "my-${var.tag_product}-db"
database_user_name = "${var.tag_product}"
database_passwd = "${module.kms.passwd_plaintext}"
database_identifier = "${var.tag_product}-rds-database"
database_max_connections = "150"
}
module sns {
source = "../sns"
aws_account = "${var.aws_account}"
}
module "api_gateway" {
source = "../api_gateway"
env = "${var.env}"
vpc_id = "${module.network.vpc_id}"
domain_name = "${var.tag_product}-dev.example.com"
dev_certificate_arn = "arn:aws:acm:${var.aws_region}:${var.aws_account}:certificate/abcd0000-a000-a000-a000-1234567890ab"
aws_account = "${var.aws_account}"
aws_region = "${var.aws_region}"
tag_client = "${var.tag_client}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
tag_business_region = "${var.tag_business_region}"
autoscaling_events_sns_topic_arn = "${module.sns.sns_topic_arn}"
db_subnet_id_1 = "${module.network.subnet_db_1A}"
db_subnet_id_2 = "${module.network.subnet_db_1B}"
ec2_role = "${var.tag_product}-assume-iam-role"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
module "ec2" {
source = "../ec2"
s3_bucket = "${var.tag_product}_dev_bucket"
aws_region = "${var.aws_region}"
env = "${var.env}"
ec2_key_name = "my-${var.tag_product}-key"
ec2_instance_type = "t2.micro"
aws_account = "${var.aws_account}"
vpc_id = "${module.network.vpc_id}"
binary_path = "${module.build_env.binary_path}"
binary_hash = "${module.build_env.binary_hash}"
git_hash_short = "${module.build_env.git_hash_short}"
private_key = "${format("%s/keys/%s-%s.pem", path.root, var.tag_product, var.env)}"
cloudfront_domain = "${module.api_gateway.cloudfront_domain}"
api_gateway_domain = "${module.api_gateway.api_gateway_cname}"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_product = "${var.tag_product}"
tag_business_unit = "${var.tag_business_unit}"
auto_scale_desired_capacity = "1"
auto_scale_max = "2"
auto_scale_min = "1"
autoscaling_events_sns_topic = "${module.sns.sns_topic_arn}"
subnet_external_b = "${module.network.subnet_external_b}"
subnet_external_a = "${module.network.subnet_external_a}"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
Then my QA is basically the same (modify a couple of vars at the top); the most important difference is the very top of qa/main.tf, mostly these vars:
provider "aws" {
region = "us-east-1"
profile = "blah-qa"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-qa-bucket"
key = "sweet/qa.terraform.tfstate"
region = "us-east-1"
profile = "blah-qa"
}
}
variable "aws_account" {
default = "000000000001"
}
variable "env" {
default = "qa"
}
Using this, our dev and QA backends have different state files in different buckets in different AWS accounts. I don't know what your requirements are, but this has satisfied most projects I've worked with; in fact, we are expanding our usage of this model across my org.
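If it helps to picture the workflow: you run Terraform from the environment folder you want to change, and the backend block in that folder's main.tf decides which bucket, state file and AWS profile get used, roughly:
cd dev
terraform init    # wires up the sweet-dev-bucket backend and the blah-dev profile
terraform plan
terraform apply
cd ../qa
terraform init    # same module sources, but the QA backend, state file and account
terraform plan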