Terraform import: invalid index - terraform

I'm trying to replace the deprecated resource 'azurerm_sql_server' with 'azurerm_mssql_server' and ran into an 'Invalid index' error in the process.
A simplified demo of the situation (with Terraform v0.14.5 and v1.0.5):
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "=2.49.0"
    }
  }
}
provider "azurerm" {
  features {}
}
locals {
  prefix = toset(["primary", "secondary"])
}
resource "azurerm_resource_group" "rg" {
name = "rgtest"
location = "Canada Central"
}
resource "random_password" "sql_admin_password" {
length = 16
special = true
number = true
upper = true
lower = true
min_special = 2
min_numeric = 2
min_upper = 2
min_lower = 2
}
resource "azurerm_sql_server" "instance" {
for_each = local.prefix
name = "${each.value}-sqlsvr"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
version = "12.0"
administrator_login = "ssadmin"
administrator_login_password = random_password.sql_admin_password.result
}
locals {
primary_sql_srv = azurerm_sql_server.instance["primary"].name
secondary_sql_srv = azurerm_sql_server.instance["secondary"].name
}
# other TF resources using local.primary_sql_srv and local.secondary_sql_srv
The infrastructure is already deployed and there is no intention to re-create the database servers, so we need to change the resource type and import the existing servers. According to the Terraform documentation, this can be done with the 'terraform state rm' and 'terraform import' commands.
So,
Change the configuration script
...
resource "azurerm_mssql_server" "instance" {
...
locals {
  primary_sql_srv = azurerm_mssql_server.instance["primary"].name
  secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
}
# other TF resources using local.primary_sql_srv and local.secondary_sql_srv
Remove the azurerm_sql_server resources from the state file; both commands succeed:
terraform.exe state rm azurerm_sql_server.instance[`\`"primary`\`"]
terraform.exe state rm azurerm_sql_server.instance[`\`"secondary`\`"]
Import the primary database server
> terraform.exe import azurerm_mssql_server.instance[`\`"primary`\`"] "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr"
azurerm_mssql_server.instance["primary"]: Importing from ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr"...
azurerm_mssql_server.instance["primary"]: Import prepared!
Prepared azurerm_mssql_server for import
azurerm_mssql_server.instance["primary"]: Refreshing state... [id=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Current state list
❯ terraform.exe state list
azurerm_mssql_server.instance["primary"]
azurerm_resource_group.rg
random_password.sql_admin_password
Import the secondary database server
> terraform.exe import azurerm_mssql_server.instance[`\`"secondary`\`"] "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr"
azurerm_mssql_server.instance["secondary"]: Importing from ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr"...
azurerm_mssql_server.instance["secondary"]: Import prepared!
Prepared azurerm_mssql_server for import
azurerm_mssql_server.instance["secondary"]: Refreshing state... [id=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr]
Error: Invalid index
on C:\Work\Projects\2021\20210812RenameResource\t1env\main.tf line 49, in locals:
49: secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
|----------------
| azurerm_mssql_server.instance is object with 1 attribute "primary"
The given key does not identify an element in this collection value.
The state refresh during the second import evaluates the locals block and fails because the 'secondary' server instance is not in the state yet.
So to me this is a deadlock: I cannot import the 'secondary' server resource because of the refresh error, and the refresh error is caused by the missing 'secondary' server resource.
Two ways I can think of:
Manually add the 'secondary' server resource to the state file, which is definitely not proper.
Remove the 'locals' block, which is fine in the demo but would mean a lot of changes in real code because of the dependencies.
Any thoughts, please? Thank you.

This is a bug in terraform import that was introduced in version 0.13. During a terraform import run, it attempts to validate local values in the config that reference the resource being imported against state that does not yet exist. There are basically three workarounds for this:
Downgrade temporarily to Terraform 0.12 where this bug does not exist.
This is really not a great option, because the Terraform version is recorded in the state, and you may be locked out of running Terraform CLI commands against a state that was last synced with a later version.
Manually modify the state to contain the resources.
Also really not a great option, because this could corrupt the state and/or cause other issues through malformed entries.
Temporarily comment out the relevant locals and any code referencing the local variable values.
This is what I always ended up using. You can put a multiline comment in the /* ... */ style around the relevant locals that reference the exported resource attributes of the imported resource, and you will also need to do the same in any other areas of the config that reference those local variables. You can then uncomment the code once the imports are complete.
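For example, around the locals from the question, the temporary comment could look roughly like this (a sketch; any other config that reads local.primary_sql_srv or local.secondary_sql_srv would need the same treatment until the imports finish):
/*
locals {
  primary_sql_srv   = azurerm_mssql_server.instance["primary"].name
  secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
}
*/
Once both terraform import commands have completed, remove the comment markers and run terraform plan to confirm nothing is pending.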

I had the same issue with Terraform 1.2.8; after updating to 1.3.0 the import was successful. It looks like this version resolves the issue.
Edit: as stated in the Terraform v1.3 changelog:
terraform import: Better handling of resources or modules that use for_each, and situations where data resources are needed to complete the operation. (#31283)
This description matches my situation 100% (use of for_each, data blocks & modules).
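For what it's worth, on current Terraform releases (1.5 and later) the same imports can also be declared in configuration with import blocks, which are then carried out as part of terraform plan/apply rather than one CLI command per instance. A sketch using the addresses and IDs from the question:
import {
  to = azurerm_mssql_server.instance["primary"]
  id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr"
}
import {
  to = azurerm_mssql_server.instance["secondary"]
  id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr"
}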

Related

A resource with the ID "/subscriptions/.../resourceGroups/rgaks/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists

I have created a storage account and a container inside it to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I call it from the root module's "main.tf". I have two modules, "aks_backup_storage" and "aks_backup_container". The modules are created successfully after running "terraform apply", but at the end it raises the errors shown below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
root module
module "aks_backup_storage" {
source = "../modules/aks_pv_storage_container"
rg_aks_backup_storage = var.rg_aks_backup_storage
aks_backup_storage_account = var.aks_backup_storage_account
aks_backup_container = var.aks_backup_container
rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
aks_backup_retention_days = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different container can be created for the different backup level such as cluster, Namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
#name = "aks-backup-container"
name = var.aks_backup_container
#storage_account_name = azurerm_storage_account.aks_backup_storage.name
storage_account_name= var.aks_backup_storage_account
}
I have also tried to import the resource using the below command:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But it fails with the following zsh error:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issues when I created the resources using the same code without declaring any modules.
Now I have several modules in the root module's main.tf file.
Here is my project directory structure:
I really appreciate any suggestions. Thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
type = string
description = "storage account name for the backup"
default = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
type = string
description = "storage account name for the backup"
default = "aksbackupstorage"
}
variable "aks_backup_container" {
type = string
description = "storage container name "
#default = "aks-storage-container"
default = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
type = string
default = "westeurope"
}
variable "aks_backup_retention_days" {
type = number
default = 90
}
The storage account name that you use must be globally unique across Azure (see the naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently would be to add a random suffix at the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
I also received the same error when I ran terraform apply while creating a container registry.
It usually occurs when the local Terraform state file does not match the resources that actually exist in Azure.
A resource can be missing from the Terraform state even though it was deployed previously and still exists in the portal or resource group. If you run into this kind of issue, check the state file; if the resource is not present in it, use the following command to import it.
Note: validate that the Terraform state and the deployed resources match. Run terraform init & terraform apply once you are done with the changes.
To resolve this error, use terraform import <resource address> <resource ID>.
Here, as an example, I imported the container registry and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
After that I ran terraform apply again and the resource deployed successfully, with no errors, and it now shows as deployed in the Portal.
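In the asker's case the storage account is created inside the aks_backup_storage module, so the import address would need the module prefix. Roughly (a sketch; quoting the ID keeps the shell from mangling it, and the original "no matches found" error came from zsh trying to expand the literal square brackets in the command):
terraform import module.aks_backup_storage.azurerm_storage_account.aks_backup_storage "/subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage"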

terraform import aws_vpc.main vpc-0ea21234

When I import an existing VPC into Terraform, I get the following error when running my Terraform configuration.
Error: error deleting EC2 VPC (vpc-0ea21234): DependencyViolation: The vpc 'vpc-0ea21234' has dependencies and cannot be deleted.
status code: 400, request id: 4630706a-5378-4e72-a3df-b58c8c7fd09b
Why is it trying to delete the VPC? How can I make it use the VPC? I'll post the main file I used to make the import and the import command below.
import command (this succeeds)
terraform import aws_vpc.main vpc-0ea21234
main file
provider "aws" {
region = "us-gov-west-1"
profile = "int-pipe"
}
# terraform import aws_vpc.main vpc-0ea21234
resource "aws_vpc" "main" {
name = "cred-int-pipeline-vpc"
cidr = "10.25.0.0/25"
}
# terraform import aws_subnet.main subnet-030de2345
resource "aws_subnet" "main" {
vpc_id = "vpc-0ea21234"
name = "cred-int-pipeline-subnet-d-az2"
cidr = "10.25.0.96/27"
}
You probably have differences between what is in your Terraform configuration file and the resource you imported.
Run terraform plan; it will show you exactly what the differences are and why the resource must be deleted or re-created.
After that, either manually change the resource in AWS or change your configuration file. Once the existing resource and the configuration match, the delete and re-create won't be triggered.
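Note that aws_vpc and aws_subnet do not take name or cidr arguments (the CIDR argument is cidr_block, and the name normally goes into tags), so the configuration in the question would not line up with the imported resources. A sketch of what a matching configuration might look like, reusing the values from the question (verify them against the real VPC with terraform plan):
resource "aws_vpc" "main" {
  cidr_block = "10.25.0.0/25"
  tags = {
    Name = "cred-int-pipeline-vpc"
  }
}
resource "aws_subnet" "main" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.25.0.96/27"
  tags = {
    Name = "cred-int-pipeline-subnet-d-az2"
  }
}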

Switch terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage state remotely in Terraform Cloud.
I have downloaded and installed the latest Terraform 0.13 CLI.
Then I removed the .terraform directory.
Then I ran terraform init and got no error.
Then I ran:
➜ terraform apply -var-file env.auto.tfvars
Error: Provider configuration not present
To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.
Releasing state lock. This may take a few moments...
This is the content of the module/kubernetes/main.tf
###################################################################################
# EKS CLUSTER #
# #
# This module contains configuration for EKS cluster running various applications #
###################################################################################
module "eks_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
namespace = var.project
environment = var.environment
attributes = [var.component]
name = "eks"
}
#
# Local computed variables
#
locals {
names = {
secretmanage_policy = "secretmanager-${var.environment}-policy"
}
}
data "aws_eks_cluster" "cluster" {
name = module.eks-cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-cluster.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.9"
}
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = module.eks_label.id
cluster_version = var.cluster_version
subnets = var.subnets
vpc_id = var.vpc_id
worker_groups = [
{
instance_type = var.cluster_node_type
asg_max_size = var.cluster_node_count
}
]
tags = var.tags
}
# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
name = local.names.secretmanage_policy
description = "allow to read secretmanager secrets ${var.environment}"
policy = file("modules/kubernetes/policies/secretmanager.json")
}
#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
role = module.eks-cluster.worker_iam_role_name
policy_arn = aws_iam_policy.secretmanager-policy.arn
}
#
# Attach the S3 policy to the workers
# so we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
role = module.eks-cluster.worker_iam_role_name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:
terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
This fixed this error for me; on to the next one. All of this just to upgrade a Terraform version.
For us, we updated all the provider URLs that we were using in the code, like below:
terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' \
'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' \
'registry.terraform.io/hashicorp/aws'
I wanted to be very specific with the replacement, so I used the broken URL as the source and the new one as the replacement.
To be clear, this only applies to Terraform 0.13.
https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
This error arises when there’s an object in the latest Terraform state that is no longer in the configuration but Terraform can’t destroy it (as would normally be expected) because the provider configuration for doing so also isn’t present.
Solution:
This should arise only if you've recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply to have Terraform destroy the "data.null_data_source" object, and then you can remove the provider "null" block again because it will no longer be needed.
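Restoring the provider for that one cleanup apply can be as small as the following (a minimal sketch; the required_providers entry is only needed if your configuration pins providers explicitly):
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
  }
}
provider "null" {}
Once terraform apply has removed the orphaned object from state, the block can be deleted again.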

Issues with importing resource to terraform module

I have created a module named workflow for an Azure Logic App.
Here is the module:
resource "azurerm_logic_app_workflow" "LogicApp" {
name = "${var.LogicAppName}"
location = "${var.LogicAppLocation}"
resource_group_name = "${var.rgName}"
workflow_schema = "${var.schema}"
}
In workflow_schema I'm specifying the path to the file that contains the Logic App configuration.
In main config.tf I have the following setup:
module "workflow" {
source = "./modules/workflow/"
LogicAppName = "LaName"
LogicAppLocation = "${azurerm_resource_group.rg.location}"
rgName = "${azurerm_resource_group.rg.name}"
schema = "${file("./path/to/the/file/LaName")}"
}
So, when I'm running terraform init and terraform plan everything works perfectly fine.
Since my logic app was created earlier, I want to import it so that terraform apply won't overwrite it.
I am running the following command and it returns the error:
terraform import module.workflow.azurerm_logic_app_workflow.LogicApp /subscriptions/mySubscriptionID/resourceGroups/myRgName/providers/Microsoft.Logic/workflows/LaName
Error: Import to non-existent module
module.workflow is not defined in the configuration. Please add configuration
for this module before importing into it.
I'm using the following versions of software:
Terraform v0.12.13
+ provider.azurerm v1.28.0
If anyone has any ideas why terraform import fails, please share them.
I see the issue in the naming.
Your module is named workflow, and in your configuration you name the resource workflow too; these should be different. You are effectively trying to import into the resource directly.
Example:
module "workflow-azure" {
source = "./modules/workflow/"
LogicAppName = "LaName"
LogicAppLocation = "${azurerm_resource_group.rg.location}"
rgName = "${azurerm_resource_group.rg.name}"
schema = "${file("./path/to/the/file/LaName")}"
}
and the import should be then
terraform import module.workflow-azure.azurerm_logic_app_workflow.LogicApp /subscriptions/mySubscriptionID/resourceGroups/myRgName/providers/Microsoft.Logic/workflows/LaName

Terraform module output not resolving for input variables in other module

Terraform Version
Terraform v0.11.11
+ provider.azurerm v1.21.0
Terraform Configuration Files
I have left out many required fields for brevity (all other config worked before I added the connection strings).
# modules/function/main.tf
variable "conn-value" {}
locals {
  conn = "${map("name", "mydb", "value", "${var.conn-value}", "type", "SQLAzure")}"
}
resource "azurerm_function_app" "functions" {
  connection_string = "${list(local.conn)}"
  # ...
}
# modules/db/main.tf
# ... other variables declared
resource "azurerm_sql_server" "server" {
  # ...
}
output "connection-string" {
  value = "Server=tcp:${azurerm_sql_server.server.fully_qualified_domain_name},1433;Initial Catalog=${var.catalog};Persist Security Info=False;User ID=${var.login};Password=${var.login-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=200;"
}
# main.tf
module "my_db" {
  source = "modules/db"
}
module "my_app" {
  source = "modules/function"
  conn-value = "${module.my_db.connection-string}"
  # ...
}
Expected Behavior on terraform plan
The module.my_db.connection-string output resolves to a string when passed to the my_app conn-value variable and is able to be used in the map/list passed to the azurerm_function_app.functions.connection_string variable.
Actual Behavior on terraform plan
I get this error:
module.my_app.azurerm_function_app.functions: connection_string: should be a list
If I replace "${var.conn-value}" in the modules/function/main.tf locals with just a string, it works.
Update
In response to this comment, I updated the script above with the connection string construction.
I finally found the GitHub issue that references the problem I am having (I found the issue through this gist comment). This describes the problem exactly:
Assigning values to nested blocks is not supported, but appears to work in certain cases due to a number of coincidences...
This limitation is in <= v0.11, but is apparently fixed in v0.12 with the dynamic block.
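For reference, in v0.12 syntax the nested connection_string block can be generated with a dynamic block instead of assigning a list to it. A sketch only; the attribute names follow the azurerm_function_app connection_string block, and the resource's other required arguments are omitted as in the question:
# modules/function/main.tf (v0.12+ sketch)
variable "conn-value" {}
resource "azurerm_function_app" "functions" {
  # ... other required arguments ...
  dynamic "connection_string" {
    for_each = [
      {
        name  = "mydb"
        type  = "SQLAzure"
        value = var.conn-value
      },
    ]
    content {
      name  = connection_string.value.name
      type  = connection_string.value.type
      value = connection_string.value.value
    }
  }
}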
