While I was trying to run terraform plan with Terraform 0.13 on an old module that had previously been applied with Terraform 0.11, I got the error:
Error: cannot decode dynamic from flatmap
The error does not point to any specific line, which made it difficult to troubleshoot.
Part of my main.tf file:
provider "aws" {
region = var.region
version = "~> 3.57.0"
}
data "terraform_remote_state" "database" {
backend = "s3"
config = {
bucket = "my-s3-bucket-40370278403408"
region = var.region
key = "app/database/terraform.tfstate"
}
}
resource "aws_security_group_rule" "allow_mysql_from_server" {
type = "ingress"
protocol = -1
from_port = 3306
to_port = 3306
source_security_group_id = aws_security_group.ecs_fargate.id
security_group_id =
data.terraform_remote_state.database.outputs.rds["security_group"]
}
.....
Root Cause:
I was using remote state from a dependent module (database) in this module, as shown in the main.tf file. Although this module's own state still referenced the dependent module's remote state in the Terraform 0.11 format, I had already run terraform apply on that dependent module, so its remote state was now in the Terraform 0.13 format. Terraform therefore could not decode and compare those remote state outputs.
Fix:
I found the fix here
I needed to remove the stale remote state data source from my state using
terraform state rm data.terraform_remote_state.database
Then I could run terraform plan without errors!
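For anyone hitting the same error, a minimal sketch of the whole workflow (the state address data.terraform_remote_state.database matches the example above; adjust it to whatever terraform state list reports for your configuration):
# Find stale remote-state data sources recorded in the old-format state
terraform state list | grep terraform_remote_state
# Drop the stale entry; a data source is simply re-read on the next run
terraform state rm data.terraform_remote_state.database
# Plan now re-reads the (upgraded) remote state instead of decoding the old copy
terraform plan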
Related
I'm trying to create a Terraform backend in my TF script. The problem is that I'm getting errors that variables are not allowed.
Here is my code:
# Configure the Azure provider
provider "azurerm" {
  version = "~> 2.0"
}

# Create an Azure resource group
resource "azurerm_resource_group" "example" {
  name     = "RG-TERRAFORM-BACKEND"
  location = "$var.location"
}

# Create an Azure storage account
resource "azurerm_storage_account" "example" {
  name                     = "$local.backendstoragename"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = "$var.tags"
}

# Create an Azure storage container
resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Create a Terraform backend configuration
resource "azurerm_terraform_backend_configuration" "example" {
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_name = azurerm_storage_account.example.name
  container_name       = azurerm_storage_container.example.name
  key                  = "terraform.tfstate"
}

# Use the backend configuration to configure the Terraform backend
terraform {
  backend "azurerm" {
    resource_group_name  = azurerm_terraform_backend_configuration.example.resource_group_name
    storage_account_name = azurerm_terraform_backend_configuration.example.storage_account_name
    container_name       = azurerm_terraform_backend_configuration.example.container_name
    key                  = azurerm_terraform_backend_configuration.example.key
  }
}
What am I doing wrong? All of a sudden Terraform init is giving me the following errors:
Error: Variables not allowed
│
│ on main.tf line 65, in terraform:
│ 65: key = azurerm_terraform_backend_configuration.example.key
│
│ Variables may not be used here.
╵
I get the above error for ALL lines.
What am I doing wrong?
I tried to refactor azurerm_terraform_backend_configuration.example.container_name as an interpolation - i.e. "$.." - but that didn't get accepted.
Has anything changed in Terraform? This wasn't the case a few years ago.
I have not found this resource azurerm_terraform_backend_configuration in any of the terraform-provider-azurerm documentation.
Check this URL for search results.
https://github.com/hashicorp/terraform-provider-azurerm/search?q=azurerm_terraform_backend_configuration
I am not aware of the resource azurerm_terraform_backend_configuration either, but as of now Terraform itself does not support variables in the backend configuration.
Official documentation on Azurerm Backend
What you are trying here creates a chicken-and-egg problem (even if I ignore "azurerm_terraform_backend_configuration"): initializing the Terraform code requires the remote backend to already exist, but creating the backend resources requires not just terraform init but also terraform apply, which cannot run until initialization succeeds.
The following are two possible solutions.
1: Create the resources required by the backend manually in the portal and then use them in your backend config (hard-coded values instead of any data source or variables).
2: Create the resources with the local backend and then migrate the local state to the remote backend.
Step 2.1: Create backend resources with local backend initially.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
Backend resources
locals {
  backendstoragename = "stastackoverflow001"
}

# variable definitions
variable "tags" {
  type        = map(string)
  description = "(optional) Tags attached to resources"
  default = {
    used_case = "stastackoverflow"
  }
}

# Create an Azure resource group
resource "azurerm_resource_group" "stackoverflow" {
  name     = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  location = "West Europe"
}

# Create an Azure storage account
resource "azurerm_storage_account" "stackoverflow" {
  name                     = local.backendstoragename ## "${local.backendstoragename}" also works, but plain local.backendstoragename is preferred
  location                 = azurerm_resource_group.stackoverflow.location
  resource_group_name      = azurerm_resource_group.stackoverflow.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags ## "${var.tags}" also works, but plain var.tags is preferred
}

# Create an Azure storage container
resource "azurerm_storage_container" "stackoverflow" {
  name                  = "stackoverflow"
  storage_account_name  = azurerm_storage_account.stackoverflow.name
  container_access_type = "private"
}
Step 2.2: Apply the code with local backend.
terraform init
terraform plan # to view the plan
terraform apply -auto-approve # omit -auto-approve if you do not want automatic approval on apply
After applying you will get the message:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Step 2.3: Update the backend configuration from local to remote.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"

  ## Add remote backend config.
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "terraformstate"
  }
}
Re-initialize Terraform.
After adding your remote backend, run the terraform init -reconfigure command and then type yes to migrate your local backend to the remote backend.
➜ variables_in_azurerm_backend git:(main) ✗ terraform init -reconfigure
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. No existing state was found in the newly
configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.37.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now Terraform will use the configured remote backend and will still be able to manage the resources created in steps 2.1 and 2.2. You can verify this by running the terraform plan command, which should report no changes:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
One more side note: version constraints inside provider configuration blocks are deprecated and will be removed in a future version of Terraform; declare them in the required_providers block instead.
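A minimal sketch of the recommended form, reusing the ~> 2.0 constraint from the question above: the constraint moves into required_providers, and the provider block keeps only its configuration.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {} # no version argument here
}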
Special considerations: use a different container key and directory for your other infrastructure Terraform configurations, to avoid accidental destruction of the storage account used for the backend config.
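For instance, a sketch of another configuration sharing the same storage account but with its own key (the app/network/terraform.tfstate key name is hypothetical):
terraform {
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "app/network/terraform.tfstate" # unique key per configuration
  }
}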
UPDATED
I am trying to provision multiple SQL databases in Azure using Terraform.
My child module has the following code that provisions a SQL database:
providers.tf
// default provider
provider "azurerm" {
  alias = "main"
  features {}
}

// The provider that can access the storage account to store diagnostics
provider "azurerm" {
  alias = "storage_account"
  features {}
}
sql_db.tf
resource "azurerm_mssql_database" "default" {
name = var.name
base_name = var.base_name
...
tags = var.tags
provider = azurerm.main
}
data.tf
data "azurerm_storage_account" "storage" {
name = var.storage_account_name
resource_group_name = var.storage_account_rg
provider = azurerm.storage_account
}
I am calling this module in my main.tf file as follows, where I want to provision multiple SQL databases using for_each:
module "sql_db" {
for_each = var.sql_db
source = "...../sql_db.git"
base_name = each.value.base_name
name = each.value.name
providers = {
azurerm.main = azurerm.main
azurerm.storage_account = azurerm.storage_account
}
}
provider "azurerm" {
features {}
version = "=2.20.0"
}
// default provider
provider "azurerm" {
alias = "main"
features {}
}
provider "azurerm" {
alias = "storage_account"
features {}
}
When I run plan, I get the following error:
Error: Module does not support for_each
on main.tf line 35, in module "sql_db":
35: for_each = var.sql_db
Module "sql_db" cannot be used with for_each because it contains a nested
provider configuration for "azurerm.main", at
.terraform\modules\sql_db\providers.tf:2,10-19.
This module can be made compatible with for_each by changing it to receive all
of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.
Error: Module does not support for_each
on main.tf line 35, in module "sql_db":
35: for_each = var.sql_db
Module "sql_db" cannot be used with for_each because it contains a nested
provider configuration for "azurerm.storage_account", at
.terraform\modules\sql_db\providers.tf:8,10-19.
This module can be made compatible with for_each by changing it to receive all
of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.
The simple answer is: it's not supported. From the Terraform documentation:
A module containing its own provider configurations is not compatible with the for_each, count, and depends_on arguments that were introduced in Terraform v0.13.
HashiCorp has been absolutely adamant that providers can never be declared dynamically, which is why they allow neither a for_each/count within a provider block, nor a for_each/count on a module that contains a provider block.
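In practice the fix is the one the error message hints at: remove the provider blocks from the child module and declare the aliases it expects with configuration_aliases (supported since Terraform 0.15), so that every configuration is passed in by the caller. A sketch of what the child module's providers.tf could contain instead; the providers block in the calling module from the question then works unchanged:
# Child module: declare (but do not configure) the aliased providers it expects.
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      configuration_aliases = [azurerm.main, azurerm.storage_account]
    }
  }
}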
I'm working on an AWS multi-account setup with Terraform. I've got a master account that creates several sub-accounts, and in the sub-accounts I'm referencing the master's remote state to retrieve output values.
The terraform plan command is failing for this configuration in a test main.tf:
terraform {
  required_version = ">= 0.12.0"

  backend "s3" {
    bucket = "bucketname"
    key    = "statekey.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region  = "us-east-1"
  version = "~> 2.7"
}

data "aws_region" "current" {}

data "terraform_remote_state" "common" {
  backend = "s3"
  config {
    bucket = "anotherbucket"
    key    = "master.tfstate"
  }
}
With the following error:
➜ test terraform plan
Error: Unsupported block type
on main.tf line 20, in data "terraform_remote_state" "common":
20: config {
Blocks of type "config" are not expected here. Did you mean to define argument
"config"? If so, use the equals sign to assign it a value.
From what I can tell from the documentation, this should be working… what am I doing wrong?
➜ test terraform -v
Terraform v0.12.2
+ provider.aws v2.14.0
It seems the related documentation wasn't updated after the upgrade to 0.12.x.
As the error prompt suggests, add = after config:
data "terraform_remote_state" "common" {
backend = "s3"
config = {
bucket = "anotherbucket"
key = "master.tfstate"
}
}
If that fixes the problem, I recommend raising a PR to update the documentation, so others can avoid the same issue.
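One related 0.12 change worth knowing: outputs of a terraform_remote_state data source are now nested under the outputs attribute. A short sketch, assuming the master state exposes an output named vpc_id (hypothetical name):
# Terraform 0.12+: remote state outputs live under .outputs
locals {
  master_vpc_id = data.terraform_remote_state.common.outputs.vpc_id
}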
We are trying to create Terraform modules for the activities below in AWS, so that we can reuse them wherever required.
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the modules listed above. So we decided to create one more module for the provider, so that we can call that provider module in the other modules (VPC, Subnet, etc.).
The issue with this approach is that it does not pick up the provider value, and instead asks for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it does not take the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the concerns below:
Wrap the provider in a Terraform module.
Handle the multiple-region use case using the provider module, or any other way.
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it previously wasn't possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And run terraform graph | dot -Tpng > graph.png against it: the graph doesn't change at all from when the explicit dependency isn't there.
This seems like a potential bug in Terraform's graph-building stage that should probably be raised as an issue, but I don't know the core code base well enough to spot where the change would need to be made.
For our usage we use symlinks heavily in our Terraform code base, some of which is historic from before Terraform supported other ways of doing things, but this could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed for every place you would ever run Terraform (i.e. not at a module level), and then symlink this file into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
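A minimal sketch of such a shared file, using the environment.tf name mentioned above (the default region value is illustrative):
# environment.tf - symlinked into every directory where Terraform is run
variable "region" {
  default = "ap-south-1" # illustrative default; override per environment
}

provider "aws" {
  region = "${var.region}"
}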
Step 1
Add region aliases in the main.tf file where you are going to execute terraform plan.
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block.
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  ...
}
Note: Terraform version v1.0.5 as of now.
If a Terraform script uses a module that has outputs, it's possible to access those module outputs by using the -module option of the terraform output command:
$ terraform output --help
Usage: terraform output [options] [NAME]

  Reads an output variable from a Terraform state file and prints
  the value. If NAME is not specified, all outputs are printed.

Options:

  -state=path    Path to the state file to read. Defaults to
                 "terraform.tfstate".

  -no-color      If specified, output won't contain any color.

  -module=name   If specified, returns the outputs for a
                 specific module

  -json          If specified, machine readable output will be
                 printed in JSON format
If I store that state file in S3 or some such, I can then reference the outputs of the main script by using the terraform_remote_state data source.
data "terraform_remote_state" "base_networking" {
backend = "s3"
config {
bucket = "${var.remote_state_bucket}"
region = "${var.remote_state_region}"
key = "${var.networking_remote_state_key}"
}
}
resource "aws_instance" "my_instance" {
subnets = "${data.terraform_remote_state.base_networking.vpc_id}"
}
Is it possible to access the module outputs that are present in the state file as well? I'm looking for something like "${data.terraform_remote_state.base_networking.module.<module_name>.<output>}" or similar.
Yes, you can access remote state outputs from your own modules. You just need to "propagate" the outputs.
E.g., let's say you have something like this: your base_networking infrastructure contains a module for creating your VPC, and you want that VPC ID to be accessible via remote state:
base_networking/
  main.tf
  outputs.tf
  vpc/
    main.tf
    outputs.tf
In base_networking/main.tf you create your VPC using your base_networking/vpc module:
module "vpc" {
source = "./vpc"
region = "${var.region}"
name = "${var.vpc_name}"
cidr = "${var.vpc_cidr}"
}
In base_networking/vpc/outputs.tf in your module you have an id output:
output "id" {
value = "${aws_vpc.vpc.id}"
}
In base_networking/outputs.tf you also have a vpc_id output that propagates module.vpc.id:
output "vpc_id" {
  value = "${module.vpc.id}"
}
With that you can now access vpc_id using something like:
data "terraform_remote_state" "base_networking" {
backend = "s3"
config = {
bucket = "${var.remote_state_bucket}"
region = "${var.remote_state_region}"
key = "${var.networking_remote_state_key}"
}
}
[...]
vpc_id = "${data.terraform_remote_state.base_networking.vpc_id}"