terraform_remote_state data block syntax - terraform

I'm working on an AWS multi-account setup with Terraform. I've got a master account that creates several sub-accounts, and in the sub-accounts I'm referencing the master's remote state to retrieve output values.
The terraform plan command is failing for this configuration in a test main.tf:
terraform {
  required_version = ">= 0.12.0"
  backend "s3" {
    bucket = "bucketname"
    key    = "statekey.tfstate"
    region = "us-east-1"
  }
}
provider "aws" {
  region  = "us-east-1"
  version = "~> 2.7"
}
data "aws_region" "current" {}
data "terraform_remote_state" "common" {
  backend = "s3"
  config {
    bucket = "anotherbucket"
    key    = "master.tfstate"
  }
}
With the following error:
➜ test terraform plan
Error: Unsupported block type
on main.tf line 20, in data "terraform_remote_state" "common":
20: config {
Blocks of type "config" are not expected here. Did you mean to define argument
"config"? If so, use the equals sign to assign it a value.
From what I can tell from the documentation, this should be working… what am I doing wrong?
➜ test terraform -v
Terraform v0.12.2
+ provider.aws v2.14.0

It seems the related documentation wasn't updated after the upgrade to 0.12.x.
As the error message suggests, add an = after config:
data "terraform_remote_state" "common" {
backend = "s3"
config = {
bucket = "anotherbucket"
key = "master.tfstate"
}
}
If this fixes the problem, consider raising a PR to update the documentation so others can avoid the same issue.
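In 0.12 and later the remote state's values are also read through the outputs attribute. A minimal sketch, assuming the master state exposes an output named vpc_id (a hypothetical name):

resource "aws_subnet" "example" {
  vpc_id     = data.terraform_remote_state.common.outputs.vpc_id # hypothetical output name
  cidr_block = "10.0.1.0/24"
}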

Related

Error: cannot decode dynamic from flatmap in terraform

While I was trying to run terraform plan using Terraform 0.13 on an old module which had previously been applied with Terraform 0.11, I got the error:
Error: cannot decode dynamic from flatmap
The error does not indicate any specific line and was difficult to troubleshoot.
Part of my main.tf file:
provider "aws" {
region = var.region
version = "~> 3.57.0"
}
data "terraform_remote_state" "database" {
backend = "s3"
config = {
bucket = "my-s3-bucket-40370278403408"
region = var.region
key = "app/database/terraform.tfstate"
}
}
resource "aws_security_group_rule" "allow_mysql_from_server" {
type = "ingress"
protocol = -1
from_port = 3306
to_port = 3306
source_security_group_id = aws_security_group.ecs_fargate.id
security_group_id =
data.terraform_remote_state.database.outputs.rds["security_group"]
}
.....
Root Cause:
I was using the remote state of a dependent module (database) in this module, as shown in the main.tf file. My old module still referenced that remote state in the Terraform 0.11 format, but I had since run terraform apply on the dependent module, so its remote state was now in the Terraform 0.13 format. Terraform therefore could not decode and compare those remote states.
Fix:
I found the fix here
I needed to remove the remote state data source from my state using
terraform state rm data.terraform_remote_state.database
Then I could run terraform plan without errors!
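Data sources are simply re-read on the next run, so removing the stale entry from state is safe. A minimal sketch of the sequence (the address matches the data block above):

terraform state rm data.terraform_remote_state.database
terraform plan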

Error: Inconsistent dependency lock file when extract resource into a module

I am new to terraform and as I extracted one of the resources into a module I got this:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current
│ configuration:
│ - provider registry.terraform.io/hashicorp/heroku: required by this configuration but no version is selected
│
│ To update the locked dependency selections to match a changed configuration, run:
│ terraform init -upgrade
What did I do?
First I had this:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
resource "heroku_addon" "redis" {
app = heroku_app.example.id
plan = "rediscloud:30"
}
After that, terraform init ran without error and terraform plan was successful.
Then I extracted the redis resource declaration into a module:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
module "key-value-store" {
source = "./modules/key-value-store"
app = heroku_app.example.id
plan = "30"
}
And the content of modules/key-value-store/main.tf is this:
terraform {
  required_providers {
    mycloud = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}
resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
terraform get went well, but terraform plan showed me the above error!
For this code to work, you have to have the required_providers blocks in both the root and child modules. So, the following needs to happen:
Add the required_providers block to the root module (this is what you have already)
Add the required_providers block to the child module and name it properly (currently you have set it to mycloud, and the provider "heroku" {} block is missing)
The code that needs to be added in the root module is:
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}
provider "heroku" {}
resource "heroku_app" "example" {
  name   = "learn-terraform-heroku-ob"
  region = "us"
}
module "key-value-store" {
  source = "./modules/key-value-store"
  app    = heroku_app.example.id
  plan   = "30"
}
In the child module (i.e., ./modules/key-value-store) the following needs to be present:
terraform {
  required_providers {
    heroku = {              ### not mycloud
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}
provider "heroku" {}         ### this was missing as well
resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
This stopped working when the second resource was moved into the module because Heroku is not an official Terraform provider, so its provider settings are not propagated to child modules. For unofficial providers (e.g., those marked verified), the corresponding required_providers and provider "<name>" {} blocks have to be defined in each module. Also, make sure to remove the .terraform directory and re-run terraform init.
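A minimal sketch of that reset, run from the root module directory:

rm -rf .terraform
terraform init -upgrade
terraform plan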

Terraform assigning outputs to variable or pulling output from stateful and using it

I am working with Terraform and trying to expose a security group ID as an output value, pull it from the local Terraform state file, and use it in a different resource; in my case, an aws_eks_cluster in its vpc_config block.
In the module that has the security group:
output "security_group_id" {
value = aws_security_group.a_group.id
}
In the module that reads the output (the backend config depends on which backend type you are using and how it is configured):
data "terraform_remote_state" "security_group" {
backend = "s3"
config {
bucket = "your-terraform-state-files"
key = "your-state-file-key.tfstate"
region = "us-east-1"
}
}
locals {
the_security_group_id = data.terraform_remote_state.security_group.outputs.security_group_id
}
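A minimal sketch of consuming that local in the aws_eks_cluster mentioned in the question; the cluster name, role ARN and subnet IDs below are placeholders:

resource "aws_eks_cluster" "example" {
  name     = "example-cluster"
  role_arn = "arn:aws:iam::123456789012:role/eks-cluster-role" # placeholder
  vpc_config {
    subnet_ids         = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders
    security_group_ids = [local.the_security_group_id]
  }
}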

Terraform: Set an AWS Resource's provider value via a module variable

I have created a module I want to use across multiple providers (just two AWS providers for two regions). How can I set a resource's provider value via a variable from a calling module? I am calling a module, codebuild.tf (which I want to be region-agnostic), from an MGMT module named cicd.tf. Folder structure:
main.tf
/MGMT/
-> cicd.tf
/modules/codebuild/
-> codebuild.tf
main.tf:
terraform {
  required_version = ">= 1.0.10"
  backend "s3" {
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
# default AWS provider for MGMT resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
}
# DEV Account resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-east-1"
}
# DEV Account resources in us-west-2 and global
provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-west-2"
}
module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
}
When I build my TF, it's under the MGMT AWS account, which uses the default aws provider that doesn't have an alias. I am then trying to set a provider with an AWS IAM role (that's cross-account) when I am calling the module (I made the resource a module because I want to run it in multiple regions):
/MGMT/cicd.tf:
# DEV in cicd.tf
# create the codebuild resource in the assumed role's us-east-1 region
module "dev_cicd_iac_us_east_1" {
source = "../modules/codebuild/"
input_aws_provider = "aws.dev_us-east-1"
input_aws_env = var.dev_aws_env
}
# create the codebuild resource in the assumed role's us-west-2 region
module "dev_cicd_iac_us_west_2" {
source = "../modules/codebuild/"
input_aws_provider = "aws.dev_us-west_2"
input_aws_env = var.dev_aws_env
}
/modules/codebuild/codebuild.tf:
# Code Build resource here
variable "input_aws_provider" {}
variable "input_aws_env" {}
resource "aws_codebuild_project" "codebuild-iac" {
provider = tostring(var.input_aws_provider) # trying to make it a string, with just the var there it looks for a var provider
name = "${var.input_aws_env}-CodeBuild-IaC"
# etc...
}
I get the following error when I plan the above:
│ Error: Invalid provider reference
│ On modules/codebuild/codebuild.tf line 25: Provider argument requires
│ a provider name followed by an optional alias, like "aws.foo".
How can I make the provider value a proper reference to the aws provider defined in main.tf while still using a MGMT folder/module file named cicd.tf?
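Provider configurations can't be passed as strings or ordinary variables; they are passed to child modules with the providers meta-argument on the module block. A minimal sketch of that pattern, assuming the child module just uses its default aws provider and the provider = var.input_aws_provider line is dropped; note that because cicd.tf itself lives inside the MGMT module, the dev aliases also have to be declared there (configuration_aliases in its required_providers) and handed down from main.tf in the same way:

# /MGMT/cicd.tf - pass the aliased provider instead of a string variable
module "dev_cicd_iac_us_east_1" {
  source = "../modules/codebuild/"
  providers = {
    aws = aws.dev_us-east-1
  }
  input_aws_env = var.dev_aws_env
}

# /modules/codebuild/codebuild.tf - the passed-in configuration becomes this
# module's default "aws" provider, so no provider argument is needed
resource "aws_codebuild_project" "codebuild-iac" {
  name = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}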

Terraform s3 backend vs terraform_remote_state

According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
  backend "s3" {
    bucket = "my-bucket-name"
    key    = "my-key-name"
    region = "my-region"
  }
}
What I did:
- I was using a local (terraform.tfstate) file
- I added the above snippet in my provided.tf file
- I ran terraform init (again)
- I was asked by terraform to migrate my state to the above bucket
...so far so good...
But then comes this confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now saved remotely (on the aforementioned s3 bucket) already?
terraform_remote_state isn't for storing your state; it's for retrieving it in another Terraform plan if you have outputs. It is a data source. For example, if you output your Elastic IP address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then wanted to retrieve that in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
edit: If you are retrieving outputs in Terraform 0.12 or later, you need to go through outputs (and config becomes an argument, config = { ... }):
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}
Remote state allows you to collaborate with other team members and gives you a central location to store your infrastructure state.
Apart from that, by enabling S3 versioning you get versioning of the state file, so you can track changes.
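A minimal sketch of enabling versioning on the state bucket, assuming AWS provider v4 or later (older provider versions configure versioning inside the aws_s3_bucket resource itself):

resource "aws_s3_bucket_versioning" "state" {
  bucket = "my-bucket-name"
  versioning_configuration {
    status = "Enabled"
  }
}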
