Unable to create terraform backend - Variables not allowed - azure

I'm trying to create a Terraform backend in my TF script. The problem is that I'm getting errors saying that variables are not allowed.
Here is my code:
# Configure the Azure provider
provider "azurerm" {
  version = "~> 2.0"
}

# Create an Azure resource group
resource "azurerm_resource_group" "example" {
  name     = "RG-TERRAFORM-BACKEND"
  location = "$var.location"
}

# Create an Azure storage account
resource "azurerm_storage_account" "example" {
  name                     = "$local.backendstoragename"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = "$var.tags"
}

# Create an Azure storage container
resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Create a Terraform backend configuration
resource "azurerm_terraform_backend_configuration" "example" {
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_name = azurerm_storage_account.example.name
  container_name       = azurerm_storage_container.example.name
  key                  = "terraform.tfstate"
}

# Use the backend configuration to configure the Terraform backend
terraform {
  backend "azurerm" {
    resource_group_name  = azurerm_terraform_backend_configuration.example.resource_group_name
    storage_account_name = azurerm_terraform_backend_configuration.example.storage_account_name
    container_name       = azurerm_terraform_backend_configuration.example.container_name
    key                  = azurerm_terraform_backend_configuration.example.key
  }
}
What am I doing wrong? All of a sudden Terraform init is giving me the following errors:
Error: Variables not allowed
│
│ on main.tf line 65, in terraform:
│ 65: key = azurerm_terraform_backend_configuration.example.key
│
│ Variables may not be used here.
╵
I get the above error for ALL lines.
What am I doing wrong?
I tried to refactor azurerm_terraform_backend_configuration.example.container_name as an interpolation - i.e. "$.." - but that didn't get accepted either.
Has anything changed in Terraform? This wasn't the case a few years ago.

I have not found this resource azurerm_terraform_backend_configuration in any of the terraform-provider-azurerm documentation.
Check this URL for search results.
https://github.com/hashicorp/terraform-provider-azurerm/search?q=azurerm_terraform_backend_configuration
There is no resource called azurerm_terraform_backend_configuration, and as of now Terraform does not support variables or references inside the backend configuration block.
Official documentation on Azurerm Backend
And what you are trying here creates a chicken-and-egg problem (even ignoring "azurerm_terraform_backend_configuration"): initializing the Terraform code needs the remote backend to already exist, but the remote backend resources would only exist after terraform init and terraform apply have run, which is not possible.
The following are two possible solutions.
1: Create the resources required by the backend manually (in the portal or with the Azure CLI, as sketched below) and then use them in your backend config as literal values instead of any data source or variables.
2: Create the resources with the local backend and then migrate the local state to the remote backend.
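For option 1, a minimal Azure CLI sketch, using the same names as the Terraform example below (exact flags may vary slightly between az versions):
az group create --name RG-TERRAFORM-BACKEND-STACKOVERFLOW --location westeurope
az storage account create --name stastackoverflow001 --resource-group RG-TERRAFORM-BACKEND-STACKOVERFLOW --sku Standard_LRS
az storage container create --name stackoverflow --account-name stastackoverflow001 --auth-mode login
Once these exist, the backend block can reference them as plain literal strings.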
Step 2.1: Create backend resources with local backend initially.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
Backend resources
locals {
  backendstoragename = "stastackoverflow001"
}

# variable definitions
variable "tags" {
  type        = map(string)
  description = "(optional) Tags attached to resources"
  default = {
    used_case = "stastackoverflow"
  }
}

# Create an Azure resource group
resource "azurerm_resource_group" "stackoverflow" {
  name     = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  location = "West Europe"
}

# Create an Azure storage account
resource "azurerm_storage_account" "stackoverflow" {
  name                     = local.backendstoragename ## or "${local.backendstoragename}" but better is local.backendstoragename
  location                 = azurerm_resource_group.stackoverflow.location
  resource_group_name      = azurerm_resource_group.stackoverflow.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags ## or "${var.tags}" but better is var.tags
}

# Create an Azure storage container
resource "azurerm_storage_container" "stackoverflow" {
  name                  = "stackoverflow"
  storage_account_name  = azurerm_storage_account.stackoverflow.name
  container_access_type = "private"
}
Step 2.2: Apply the code with local backend.
terraform init
terraform plan # to view the plan
terraform apply -auto-approve # omit `-auto-approve` if you do not want automatic approval on apply
After applying you will get the message:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Step 2.3: Update the backend configuration from local to remote.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"

  ## Add remote backend config.
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "terraformstate"
  }
}
Re-initialize Terraform.
After adding the remote backend, run the `terraform init -reconfigure` command and then type `yes` to migrate your local backend to the remote backend.
➜ variables_in_azurerm_backend git:(main) ✗ terraform init -reconfigure
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. No existing state was found in the newly
configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.37.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now Terraform should use the configured remote backend and will also be able to manage the resources created in steps 2.1 and 2.2. You can verify this by running the terraform plan command; it should report a No changes message.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
One more side note: version constraints inside provider configuration blocks are deprecated and will be removed in a future version of Terraform; declare them in required_providers instead, as done above.
Special considerations: use a different container key and directory for your other infrastructure Terraform configurations to avoid accidental destruction of the storage account used for the backend config.
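For example, another configuration that only consumes this backend could point at its own key (the key value here is just illustrative):
backend "azurerm" {
  resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  storage_account_name = "stastackoverflow001"
  container_name       = "stackoverflow"
  key                  = "other-infrastructure/terraform.tfstate"
}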

Related

error deploying resources on azure using terraform cloud

I have deployed resources on Microsoft Azure using Terraform. I'm using an Azure storage account container to save my Terraform states. I tried to configure Terraform Cloud to automate the deployment, but I get this error.
Error: A resource with the ID "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/msk-stage-keyvault" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
with module.keyvault.azurerm_resource_group.msk-keyvault
on ../../modules/az-keyvault/main.tf line 2, in resource "azurerm_resource_group" "msk-keyvault":
resource "azurerm_resource_group" "msk-keyvault" {
It seems that Terraform Cloud is not using the backend state configured in my provider.tf. How do I make Terraform Cloud use the backend state from provider.tf?
My Backend Provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.91.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "msk-configurations"
    storage_account_name = "mskconfigurations"
    container_name       = "key-vault"
    key                  = "stage.tfstate"
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription
  tenant_id       = var.ternant_id
}
It looks like your main.tf already has existing key vault state.
So first check whether the key vault resource is already configured in the main.tf file, or whether its state has already been imported.
If it is already present in main.tf and you are declaring it again for the backend, try removing the duplicate from main.tf and then execute again.
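Alternatively, if the resource group already exists in Azure but not in the state Terraform Cloud is using, one option is to import it. A hedged sketch; the resource address and ID are taken from the error message, with the subscription placeholder left as-is:
terraform import \
  module.keyvault.azurerm_resource_group.msk-keyvault \
  "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/msk-stage-keyvault"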
Also note that the Terraform backend needs the Azure storage account credentials beforehand in order to store the tfstate.
So avoid creating the storage account, the container, and the key vault resource all in the same apply that writes to that tfstate.
If the storage account is created first, Terraform can refer to it later in the backend.
To preconfigure the storage account and container:
Example:
1. Create the storage account and container one after the other, instead of in the same file:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "resourcegroupname"
}
resource "azurerm_storage_account" "example" {
name = "<yourstorageaccountname>"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "newterraformcont"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
Then create the msk-keyvault resource group and store the tfstate in the container.
This is my already-created state configuration in Terraform (terraform.tf):
provider "azurerm" {
features {}
}
terraform {
# Configure Terraform State Storage
backend "azurerm" {
resource_group_name = "<resourcegroup>"
storage_account_name = "<storage-earliercreated>"
container_name = " newterraformcont "
key = "terraform.tfstate"
}
}
resource "azurerm_resource_group" " msk-keyvault" {
name = "<msk-keyvault>"
location = "west us"
}
Reference:
azurerm_resource_group | Resources | hashicorp/azurerm | Terraform Registry
https://www.jorgebernhardt.com/terraform-backend

How to create a storage account for a remote state dynamically?

I know that in order to have a remote state in my Terraform code, I must create a storage account and a container. Usually this is done manually, but I am trying to create the storage account and the container dynamically using the code below:
resource "azurerm_resource_group" "state_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
terraform {
backend "azurerm" {
resource_group_name = "RG-Terraform-on-Azure"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_name = azurerm_storage_container.state_container.name
key = "terraform.tfstate"
}
}
resource "azurerm_storage_account" "state_storage_account" {
name = random_string.storage_account_name.result
resource_group_name = azurerm_resource_group.state_resource_group.name
location = azurerm_resource_group.state_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "state_container" {
name = "vhds"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_access_type = "private"
}
resource "random_string" "storage_account_name" {
length = 14
lower = true
numeric = false
upper = false
special = false
}
But, the above code complains that:
│ Error: Variables not allowed
│
│ on main.tf line 11, in terraform:
│ 11: storage_account_name = azurerm_storage_account.state_storage_account.name
│
│ Variables may not be used here.
So, I already know that I cannot use variables in the backend block; however, I am wondering if there is a solution that enables me to create the storage account and the container dynamically and store the state file in there?
Point:
I have already seen this question, but the .conf file did not work for me!
This can't be done in the same Terraform file. The backend has to exist before anything else. Terraform requires the backend to exist when you run terraform init. The backend is accessed to read the state as the very first step Terraform performs when you do a plan or apply, before any resources are actually created.
In the past I've automated the creation of the storage backend using a CLI tool. If you wanted to automate it with terraform it would have to be in a separate Terraform workspace, but then where would the backend for that workspace be?
In general, it doesn't really work to create the backend in Terraform.
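If the storage account is bootstrapped separately (with a CLI or a second configuration that uses a local backend), the main configuration can then consume the values at init time via partial backend configuration. A rough sketch, assuming a hand-maintained backend.conf file with literal values (the storage account name is a placeholder you fill in after it exists):
# backend.conf
resource_group_name  = "RG-Terraform-on-Azure"
storage_account_name = "<generated-account-name>"
container_name       = "vhds"
key                  = "terraform.tfstate"

# main.tf keeps an empty backend block
terraform {
  backend "azurerm" {}
}

# initialize with the values supplied from the file
terraform init -backend-config=backend.conf
This does not remove the chicken-and-egg constraint: the storage account still has to exist before terraform init runs.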

Terraform Error : If multiple configurations are required, set the "alias" argument for alternative configurations [provider-azure]

Here I'm trying to add resources in the Azure portal with Terraform.
I've tried setting an alias, but right after I made some changes in the configuration file and ran the terraform init command, it throws an error like this.
Can anyone help me with this? I am new to working with Terraform and Azure.
NOTE: This is the error message I am getting
Duplicate provider configuration
A default (non-aliased) provider configuration for "azurerm" was already
given at main.tf:12,1-19. If multiple configurations are required, set
the "alias" argument for alternative configurations.
As you have already declared the terraform block and the azurerm provider above, declaring them again will error out with a duplicate provider configuration.
So please remove this block:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.74"
    }
  }
  required_version = ">= 0.14.9"
}
And you can directly use the below:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "RG" {
name = "myRG"
location = "WestUS2"
}
resource "azurerm_virtual_network" "vnet" {
name = "myvnet"
location = azurerm_resource_group.RG.location
resource_group_name = azurerm_resource_group.RG.name
address_space = ["10.0.0.0/16"]
}
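If you genuinely need more than one azurerm configuration (for example, to deploy into two subscriptions), the error's hint is to keep one default provider block and mark any additional one with alias. A minimal sketch; the second subscription ID and resource names are placeholders:
provider "azurerm" {
  features {}
}

provider "azurerm" {
  alias           = "secondary"
  subscription_id = "00000000-0000-0000-0000-000000000000"
  features {}
}

resource "azurerm_resource_group" "RG_secondary" {
  provider = azurerm.secondary
  name     = "myRG-secondary"
  location = "WestUS2"
}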

How to use the terraform state in another azure subscription

I am deploying Azure infrastructure with Terraform. The Terraform state will be stored in a subscription that is different from the main deployment subscription. I am using an alias in the provider declaration. My Terraform code is like below:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.38.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "resourcegroup_name"
    storage_account_name = "storageaccount_name"
    container_name       = "mystate"
    key                  = "tfstatename1.tfstate"
  }
}

provider "azurerm" {
  features {}
}

provider "azurerm" {
  features {}
  alias           = "second_subscription"
  subscription_id = var.second_subscription_id
}
My Terraform state should be stored in the subscription targeted by the alias. How can I achieve that?
I don't think the azurerm backend configuration takes input from the azurerm provider configuration. To some extent you could say it applies its own authentication mechanism, although there are some features they share: e.g. both are capable of using the Azure CLI security context.
In order to explicitly target a subscription ID for your backend configuration, you must add it to the backend configuration block, like so:
backend "azurerm" {
resource_group_name = "resourcegroup_name"
storage_account_name = "storageaccount_name"
container_name = "mystate"
key = "tfstatename1.tfstate"
subscription_id = "091f1800-0de3-4fef-831a-003a74ce245f"
}
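Note that the backend subscription_id only controls where the state file is stored; resources are still created in whichever subscription their provider configuration points at. A minimal sketch (the resource itself is hypothetical) of using the aliased provider from the question:
resource "azurerm_resource_group" "example" {
  provider = azurerm.second_subscription
  name     = "rg-in-second-subscription"
  location = "West Europe"
}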
Reference: https://developer.hashicorp.com/terraform/language/settings/backends/azurerm

Switch terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage state in remote terraform-cloud
I have downloaded and installed the latest terraform 0.13 CLI
Then I removed the .terraform directory.
Then I ran terraform init and got no error.
Then I did:
➜ terraform apply -var-file env.auto.tfvars
Error: Provider configuration not present
To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments...
This is the content of the module/kubernetes/main.tf
####################################################################################
# EKS CLUSTER                                                                      #
#                                                                                  #
# This module contains configuration for EKS cluster running various applications #
####################################################################################
module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}

#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 Policy to Workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
All credits for this fix go to the one mentioning this on the cloudposse slack channel:
terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
This fixed my issue with this error, on to the next error. All to upgrade a version on terraform.
For us we updated all the provider URLs which we were using in the code like below:
terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' \
'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' \
'registry.terraform.io/hashicorp/aws'
I wanted to be very specific with the replacement, so I used the broken URL while replacing it with the new one.
To be more specific, this only applies to Terraform 0.13.
https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
This error arises when there’s an object in the latest Terraform state that is no longer in the configuration but Terraform can’t destroy it (as would normally be expected) because the provider configuration for doing so also isn’t present.
Solution:
This should arise only if you've recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the data "null_data_source" object, and then you can remove the provider "null" block because it will no longer be needed.
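For reference, a minimal sketch of the temporarily restored provider block, assuming the hashicorp/null provider and Terraform 0.13 syntax:
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
  }
}

provider "null" {}
Run terraform apply with this in place, then delete the block again once the orphaned data source has been removed from state.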
