Terraform Resource does not have attribute for variable

Running Terraform 0.11.7 and getting the following error:
module.frontend_cfg.var.web_acl: Resource 'data.terraform_remote_state.waf' does not have attribute 'waf_nonprod_id' for variable 'data.terraform_remote_state.waf.waf_nonprod_id'
Below is the Terraform file:
module "frontend_cfg"
{
source = "../../../../modules/s3_fe/developers"
region = "us-east-1"
dev_shortname = "cfg"
web_acl = "${data.terraform_remote_state.waf.waf_nonprod_id}"
}
data "terraform_remote_state" "waf" {
backend = "local"
config = {
name = "../../../global/waf/terraform.tfstate"
}
}
The file which creates the tfstate file referenced above is below. This file has had no issues building.
resource "aws_waf_web_acl" "waf_fe_nonprod"
{
name = "fe_nonprod_waf"
metric_name = "fenonprodwaf"
default_action
{
type = "ALLOW"
}
}
output waf_nonprod_id
{
value = "${aws_waf_web_acl.waf_fe_nonprod.id}"
}
I will spare the full contents of the CloudFront file; the relevant part is:
resource "aws_cloudfront_distribution" "fe_distribution"
{
web_acl_id = "${var.web_acl}"
}
If I put the WAF ID directly into the web_acl variable, it works just fine, so I suspect the issue is something to do with the way I am calling the data source. It appears to match the documentation, though.

Use path instead of name in the terraform_remote_state config; see the local backend documentation:
https://www.terraform.io/docs/backends/types/local.html
data "terraform_remote_state" "waf" {
backend = "local"
config = {
path = "../../../global/waf/terraform.tfstate"
}
}
or
data "terraform_remote_state" "waf" {
backend = "local"
config = {
path = "${path.module}/../../../global/waf/terraform.tfstate"
}
}
I tested this with Terraform versions 0.11.7 and 0.11.14.
If you upgrade Terraform to version 0.12.x, note that the syntax for reading remote state outputs has changed. So change
web_acl = "${data.terraform_remote_state.waf.waf_nonprod_id}"
to
web_acl = data.terraform_remote_state.waf.outputs.waf_nonprod_id
or
web_acl = "${data.terraform_remote_state.waf.outputs.waf_nonprod_id}"

Related

How do I pass a data source value to a .tfvars file value?

I'm trying to create a secret in GCP's Secret Manager.
The secret value is coming from Vault (HCP Cloud).
How can I pass the value of the secret if I'm using a .tfvars file for the values?
Creating the secret without .tfvars works. Suggestions other than a data source are welcome as well. I saw that referencing locals isn't possible inside tfvars either.
vault.tf:
provider "vault" {
address = "https://testing-vault-public-vault-numbers.numbers.z1.hashicorp.cloud:8200"
token = "someToken"
}
data "vault_generic_secret" "secrets" {
path = "secrets/terraform/cloudcomposer/kafka/"
}
main.tf:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(var.connections)
secret_id = "${var.secret_manager_prefix}-${var.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(var.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = var.connections[count.index].uri
}
dev.tfvars:
image_version         = "composer-2-airflow-2.1.4"
env_size              = "LARGE"
env_name              = "development"
region                = "us-central1"
network               = "development-main"
subnetwork            = "development-subnet1"
secret_manager_prefix = "test"
connections = [
  { name = "postgres", uri = "postgresql://postgres_user:XXXXXXXXXXXX#1.1.1.1:5432/" }, ## This one works
  { name = "kafka", uri = "${data.vault_generic_secret.secrets.data["kafka_dev_password"]}" } ## This one fails
]
Getting:
Error: Invalid expression
on ./tfvars/dev.tfvars line 39:
Expected the start of an expression, but found an invalid expression token.
Thanks in advance.
Values in tfvars files have to be static, i.e., they cannot use any kind of dynamic assignment such as a data source reference. In that case, however, using local values [1] should be a viable solution:
locals {
  connections = [
    {
      name = "kafka",
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ]
}
Then, reference it in the resources where you need it:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(local.connections)
secret_id = "${var.secret_manager_prefix}-${local.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(local.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = local.connections[count.index].uri
}
[1] https://developer.hashicorp.com/terraform/language/values/locals
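If the static entries should stay in dev.tfvars, one possible arrangement (a sketch; a var.connections list holding only the static entries is an assumption) is to merge them with the Vault-sourced entry in locals:
locals {
  # Static connections come from dev.tfvars via var.connections; the Kafka
  # entry is appended here because its URI must be read dynamically from Vault.
  connections = concat(var.connections, [
    {
      name = "kafka"
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ])
}
The resources then iterate over local.connections exactly as shown above.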

Terraform plan not working for a long time with AWS S3

I am using Terraform to deploy backend code to AWS. While configuring the Terraform environment, I ran terraform init, which works fine. However, the next command, terraform plan, hangs for a long time: it doesn't print anything to the CLI. I would love to get some help from you developers.
Here is my main.tf code.
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
default_tags {
tags = {
Owner = "Example Owner"
Project = "Example"
}
}
}
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "react-app/build"
template_vars = {
vpc_id = "vpc-abc123123123"
}
}
resource "aws_s3_bucket" "test_tf_bucket" {
bucket = local.test_tf_creds.bucket
website {
index_document = "index.html"
}
tags = {
Bucket = "Example Terraform Bucket"
}
}
resource "aws_s3_bucket_object" "build_test_tf" {
for_each = module.template_files.files
bucket = local.test_tf_creds.bucket
key = each.key
content_type = each.value.content_type
source = each.value.source_path
content = each.value.content
etag = each.value.digests.md5
tags = {
Bucket-Object = "Example Bucket Object"
}
}
I would love for you developers to help me solve this problem.

Terraform - GCP Dataproc Component Gateway Enable Issue

I'm trying to create a Dataproc cluster in GCP using the Terraform resource google_dataproc_cluster, and I would like to create a Component Gateway along with it. The documentation states that the snippet below should be used:
cluster_config {
  endpoint_config {
    enable_http_port_access = "true"
  }
}
Upon running terraform plan, I see the error "Error: Unsupported block type". I also tried using override_properties, and in the GCP Dataproc console I could see that the property is enabled, but the Component Gateway is still disabled. I wanted to understand: is there an issue with the block given in the Terraform documentation, and if so, is there an alternative I can use?
software_config {
  image_version = "${var.image_version}"
  override_properties = {
    "dataproc:dataproc.allow.zero.workers"       = "true"
    "dataproc:dataproc.enable_component_gateway" = "true"
  }
}
Below is the error from running terraform apply.
Error: Unsupported block type
on main.tf line 35, in resource "google_dataproc_cluster" "dataproc_cluster":
35: endpoint_config {
Blocks of type "endpoint_config" are not expected here.
RESOURCE BLOCK:
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "${var.cluster_name}"
region = "${var.region}"
graceful_decommission_timeout = "120s"
labels = "${var.labels}"
cluster_config {
staging_bucket = "${var.staging_bucket}"
/*endpoint_config {
enable_http_port_access = "true"
}*/
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true" /* Has Been Added as part of Component Gateway Enabled which is already enabled in the endpoint_config*/
}
}
gce_cluster_config {
// network = "${var.network}"
subnetwork = "${var.subnetwork}"
zone = "${var.zone}"
//internal_ip_only = true
tags = "${var.network_tags}"
service_account_scopes = [
"cloud-platform"
]
}
master_config {
num_instances = "${var.master_num_instances}"
machine_type = "${var.master_machine_type}"
disk_config {
boot_disk_type = "${var.master_boot_disk_type}"
boot_disk_size_gb = "${var.master_boot_disk_size_gb}"
num_local_ssds = "${var.master_num_local_ssds}"
}
}
}
depends_on = [google_storage_bucket.dataproc_cluster_storage_bucket]
timeouts {
create = "30m"
delete = "30m"
}
}
Below is the snippet that worked for me to enable the Component Gateway in GCP:
provider "google-beta" {
project = "project_id"
}
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "clustername"
provider = google-beta
region = us-east1
graceful_decommission_timeout = "120s"
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}
This issue is discussed in this Git thread.
You can enable the Component Gateway in Cloud Dataproc by using the google-beta provider in both the Dataproc cluster resource and the root configuration of Terraform.
Sample configuration:
# Terraform configuration goes here
provider "google-beta" {
  project = "my-project"
}

resource "google_dataproc_cluster" "mycluster" {
  provider                      = "google-beta"
  name                          = "mycluster"
  region                        = "us-central1"
  graceful_decommission_timeout = "120s"
  labels = {
    foo = "bar"
  }
  ...
  ...
}
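Applied to the resource block from the question, the change would look roughly like the sketch below: configure the google-beta provider, select it on the resource, and restore the commented-out endpoint_config (the remaining sub-blocks stay exactly as in the question):
provider "google-beta" {
  project = "your-project-id" # assumption: your actual project ID goes here
}

resource "google_dataproc_cluster" "dataproc_cluster" {
  provider                      = google-beta
  name                          = "${var.cluster_name}"
  region                        = "${var.region}"
  graceful_decommission_timeout = "120s"

  cluster_config {
    staging_bucket = "${var.staging_bucket}"

    # Re-enabled: per the answers above, this block is accepted by the google-beta provider
    endpoint_config {
      enable_http_port_access = "true"
    }

    # ...software_config, gce_cluster_config, master_config unchanged...
  }
}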

Migrate terraform modules to updated provider format

I inherited a codebase with all providers stored inside modules and am having a lot of trouble moving the providers out so that I can remove the resources created from modules.
The current design violates the rules outlined here: https://www.terraform.io/docs/configuration/providers.html and makes removing modules impossible.
My understanding of the migration steps is:
1. Create a provider for use at the top level.
2. Update module resources to use providers stored outside of the module.
3. Remove the module (with the top-level provider persisting).
Example module
An example /route53-alias-record/main.tf is:
variable "evaluate_target_health" {
default = true
}
data "terraform_remote_state" "env" {
backend = "s3"
config = {
bucket = "<bucket>"
key = "infra-${var.environment}-${var.target}.tfstate"
region = "<region>"
}
}
provider "aws" {
region = data.terraform_remote_state.env.outputs.region
allowed_account_ids = data.terraform_remote_state.env.outputs.allowed_accounts
assume_role {
role_arn = data.terraform_remote_state.env.outputs.aws_account_role
}
}
resource "aws_route53_record" "alias" {
zone_id = data.terraform_remote_state.env.outputs.public_zone_id
name = var.fqdn
type = "A"
alias {
name = var.alias_name
zone_id = var.zone_id
evaluate_target_health = var.evaluate_target_health
}
}
Starting usage
module "api-dns-alias" {
source = "../environment/infra/modules/route53-alias-record"
environment = "${var.environment}"
zone_id = "${module.service.lb_zone_id}"
alias_name = "${module.service.lb_dns_name}"
fqdn = "${var.environment}.example.com"
}
Provider overriding
## Same as inside module
provider "aws" {
  region              = data.terraform_remote_state.env.outputs.region
  allowed_account_ids = data.terraform_remote_state.env.outputs.allowed_accounts

  assume_role {
    role_arn = data.terraform_remote_state.env.outputs.aws_account_role
  }
}

module "api-dns-alias" {
  source      = "../environment/infra/modules/route53-alias-record"
  environment = "${var.environment}"
  zone_id     = "${module.service.lb_zone_id}"
  alias_name  = "${module.service.lb_dns_name}"
  fqdn        = "${var.environment}.example.com"

  providers = {
    aws = aws ## <-- pass in explicitly
  }
}
I was able to deploy safely with the providers set, but I do not believe they are being used inside the module, which means the handshake still fails when I remove the module and the resources cannot be deleted.
I am looking for the steps needed to migrate to an outside provider so that I can safely remove resources.
I am currently working with Terraform 0.12.24.
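For step 2 of the plan above, the module itself would need its embedded provider and remote state lookup removed, so that it uses whichever aws provider the caller passes in. A sketch of the trimmed module (the public_zone_id variable is an assumption, standing in for the module's former remote state lookup):
## route53-alias-record/main.tf with the embedded provider removed
variable "evaluate_target_health" {
  default = true
}

variable "public_zone_id" {} # assumption: previously read from remote state inside the module

resource "aws_route53_record" "alias" {
  zone_id = var.public_zone_id
  name    = var.fqdn
  type    = "A"

  alias {
    name                   = var.alias_name
    zone_id                = var.zone_id
    evaluate_target_health = var.evaluate_target_health
  }
}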

Terraform modules: Azure event subscription optional fields

I am trying to use Terraform modules to create an event subscription with a storage queue as its endpoint.
Below is the module
resource "azurerm_eventgrid_event_subscription" "events" {
name = var.name
scope = var.scope
subject_filter = var.subject_filter
storage_queue_endpoint = var.storage_queue_endpoint
}
and the Terraform is:
module "storage_account__event_subscription" {
source = "../modules/event"
name = "testevent"
scope = test
subject_filter = {
subject_begins_with = "/blobServices/default/containers/test/blobs/in"
}
storage_queue_endpoint = {
storage_account_id = test
queue_name = test
}
}
Error message:
Error: Unsupported block type
on azure.tf line 90, in module "storage_account__event_subscription":
: subject_filter {
Blocks of type "subject_filter" are not expected here.
: storage_queue_endpoint {
Blocks of type "storage_queue_endpoint" are not expected here.
How do I pass the optional fields properly in Terraform modules?
In your module:
resource "azurerm_eventgrid_event_subscription" "events" {
name = var.name
scope = var.scope
subject_filter = {
subject_begins_with = var.subject_begins_with
}
storage_queue_endpoint = var.storage_queue_endpoint
}
Formatting is off here, so make sure to run terraform fmt to account for my poor formatting. Also add the new variables to the variables.tf file.
Your Terraform file:
module "storage_account__event_subscription" {
source = "../modules/event"
name = "testevent"
scope = test
subject_begins_with = "/blobServices/default/containers/test/blobs/in"
storage_queue_endpoint = {
storage_account_id = test
queue_name = test
}
}
You create the full structure in the module and then assign the variables in the Terraform file.
Anything that will have the same, or generally the same, value can also have a default set in variables.tf, so that you get smaller chunks in the TF file.
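To illustrate, a possible variables.tf for the module (names follow the snippets above; the types and defaults are assumptions):
variable "name" {
  type = string
}

variable "scope" {
  type = string
}

variable "subject_begins_with" {
  type    = string
  default = "" # assumption: a default lets callers omit the filter
}

variable "storage_queue_endpoint" {
  # Object shape matches what the calling module block passes in
  type = object({
    storage_account_id = string
    queue_name         = string
  })
}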
