Terraform tries to destroy the imported AWS IAM user

I'm trying to import existing AWS IAM users into Terraform.
(Right now there are both Terraform-managed and unmanaged IAM users.)
So I ran the following import for an unmanaged IAM user, userA. It succeeded and I can see the user in the tfstate file.
terraform import aws_iam_user.create-users userA
Then I added userA to my Terraform variables to see whether Terraform would recognize it, but terraform apply keeps trying to destroy userA.
How can I bring userA under Terraform management without destroying it?
My Terraform scripts are as follows.
# main.tf
resource "aws_iam_user" "create-users" {
  for_each = var.users
  name     = each.key
}
#user.auto.tfvars
users = {
  "testuser1" = {
    group = ["Admin"]
  },
  "testuser2" = {
    group = ["User"]
  },
  "userA" = {
    group = ["not managed"]
  }
}
EDIT:
I tried a direct resource without for_each as follows, and Terraform recognized it.
#direct.tf
resource "aws_iam_user" "existing" {
  name = "userA"
}

You should include the key name in the resource address:
terraform import 'aws_iam_user.create-users["userA"]' userA
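To confirm the import lined up with the for_each instance, the state should now list userA under the indexed address and a plan should no longer propose destroying it (a quick check, assuming the import above succeeded):
terraform state list
# ...
# aws_iam_user.create-users["userA"]
terraform plan    # userA should now show no changes instead of a destroy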

Related

A resource with the ID "/subscriptions/.../resourceGroups/rgaks/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists

I have created a storage account, and a container inside it, to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I call it from the root module's "main.tf". I have two modules: "module aks_backup_storage" and "module aks_backup_container". The modules are created successfully after running terraform apply, but at the end it raises the errors shown below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module
module "aks_backup_storage" {
  source                         = "../modules/aks_pv_storage_container"
  rg_aks_backup_storage          = var.rg_aks_backup_storage
  aks_backup_storage_account    = var.aks_backup_storage_account
  aks_backup_container          = var.aks_backup_container
  rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
  aks_backup_retention_days     = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different container can be created for the different backup level such as cluster, Namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
#name = "aks-backup-container"
name = var.aks_backup_container
#storage_account_name = azurerm_storage_account.aks_backup_storage.name
storage_account_name= var.aks_backup_storage_account
}
I have also tried to import the resource using the below command:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But zsh only returns the following error instead of running the import:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issue when I was creating the resources with the same code without declaring any modules. Now I have several modules called from main.tf in the root module.
I really appreciate any suggestions, thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
type = string
description = "storage account name for the backup"
default = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
type = string
description = "storage account name for the backup"
default = "aksbackupstorage"
}
variable "aks_backup_container" {
type = string
description = "storage container name "
#default = "aks-storage-container"
default = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
type = string
default = "westeurope"
}
variable "aks_backup_retention_days" {
type = number
default = 90
}
The storage account name that you use must be globally unique within Azure (see the naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently is to add a random suffix at the end of the name (the random_string result is kept in the state, so the suffix stays stable across applies), e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
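If the account name gets a random suffix, the container in the child module should also reference the account resource rather than the raw variable, roughly like this (a sketch based on the child module from the question):
resource "azurerm_storage_container" "aks_backup_container" {
  name                 = var.aks_backup_container
  storage_account_name = azurerm_storage_account.aks_backup_storage.name
}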
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local Terraform state file does not match the resources that actually exist in the Portal.
A resource can also linger in the Terraform state even when it no longer exists in the Portal or resource group, if it was deployed previously. If you hit this kind of issue, compare the state file with what actually exists in the Portal. If the resource exists in Azure but is missing from the state, import it with the command below.
Note: validate that the state and the real resources line up, then run terraform init and terraform apply once you are done with the changes.
To resolve this error, use terraform import.
As an example, I imported a container registry and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
After that I ran terraform apply again and it completed successfully without any errors, and the resource shows as deployed in the Portal.
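As a quick check after importing, the following can confirm the registry is tracked in the state and that a plan no longer tries to create it (the resource address is the one used in the import command above):
terraform state list | grep azurerm_container_registry
terraform plan    # should no longer propose creating the already-existing registry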

terraform import aws_vpc.main vpc-0ea21234

When I import an existing VPC into Terraform, I get the following error when running my Terraform script.
Error: error deleting EC2 VPC (vpc-0ea21234): DependencyViolation: The vpc 'vpc-0ea21234' has dependencies and cannot be deleted.
status code: 400, request id: 4630706a-5378-4e72-a3df-b58c8c7fd09b
Why is it trying to delete the VPC? How can I make it use the VPC? I'll post the main file I used to make the import and the import command below.
import command (this succeeds)
terraform import aws_vpc.main vpc-0ea21234
main file
provider "aws" {
region = "us-gov-west-1"
profile = "int-pipe"
}
# terraform import aws_vpc.main vpc-0ea21234
resource "aws_vpc" "main" {
name = "cred-int-pipeline-vpc"
cidr = "10.25.0.0/25"
}
# terraform import aws_subnet.main subnet-030de2345
resource "aws_subnet" "main" {
vpc_id = "vpc-0ea21234"
name = "cred-int-pipeline-subnet-d-az2"
cidr = "10.25.0.96/27"
}
You probably have differences between what you have in your Terraform configuration file and the resource you imported.
Run terraform plan; it will show you exactly what the differences are and why the resource must be deleted and re-created.
After that, either manually change the resource in AWS or adjust your configuration file; once the existing resource and the configuration match, the delete and re-create won't be triggered.
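As a side note, aws_vpc and aws_subnet do not accept name or cidr arguments, so the configuration above will also diverge from what was imported. A sketch of a configuration that can line up with the imported resources (the CIDR and Name values are copied from the question and are assumptions about the existing VPC and subnet) might look like:
resource "aws_vpc" "main" {
  cidr_block = "10.25.0.0/25"
  tags = {
    Name = "cred-int-pipeline-vpc"
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.25.0.96/27"
  tags = {
    Name = "cred-int-pipeline-subnet-d-az2"
  }
}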

Issue provisioning Databricks workspace resources using Terraform

I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows. It consumes a list of inputs from a tfvars file (one entry per workspace) and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace ID in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I run terraform plan, it fails with the below error:
Missing resource instance key
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
Because azurerm_databricks_workspace.workspace has "for_each" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
  azurerm_databricks_workspace.workspace[each.key]
I couldn't use for_each in the provider block, and I haven't been able to find a way to index the workspace ID in the provider block.
Appreciate your inputs.
TF version: 0.13
Azure RM: 3.10.0
Databricks: 0.5.7
The problem is that you can create multiple workspaces when you're using for_each in the azurerm_databricks_workspace resource, but your provider block is trying to refer to a "generic" resource instance, so it's complaining.
The solution here would be either:
Remove for_each if you're creating just one workspace, or
Instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the key of the specific workspace instance from the list of workspaces.
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error as described here.
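For illustration, a minimal sketch of the second option, assuming one of the entries in var.databricks_workspace_list has workspace_nm = "workspace1" (that key, and the credential variables, are assumptions):
provider "databricks" {
  azure_client_id             = var.azure_client_id
  azure_client_secret         = var.azure_client_secret
  azure_tenant_id             = var.azure_tenant_id
  # Index the for_each resource with the key of the workspace this provider should manage
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["workspace1"].id
}

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # Explicit dependency so the pool is not created before the workspace exists
  depends_on = [azurerm_databricks_workspace.workspace]
}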

Terraform -- access resource which is created in the same main.tf file

I've created an AWS Secrets Manager resource and want to access its ARN in the same main.tf file.
This is my Terraform main.tf:
variable "ENV" {}
variable "TAGS" {}
// SECRET MANAGER
resource "aws_secretsmanager_secret" "service_name_sm" {
name = "service-name-sm-test"
tags = var.TAGS
}
// POLICY
resource "aws_iam_policy" "service_name_policy" {
name = "${var.service_name_policy_name}-${var.ENV}"
path = "/"
policy = templatefile(
"${path.module}/templates/${var.service_name_policy_name}.tmpl", {
secrets_manager_arn = resource.aws_secretsmanager_secret.service_name_sm.arn
})
}
In the policy I create, I want to use the ARN of the aws_secretsmanager_secret resource I create.
When I run terraform validate, I get an error:
A managed resource "resource" "aws_secretsmanager_secret" has not been
declared in service_name.
How can I do that?
You don't need the resource. prefix. You reference it like this: aws_secretsmanager_secret.service_name_sm.
policy = templatefile(
  "${path.module}/templates/${var.service_name_policy_name}.tmpl", {
    secrets_manager_arn = aws_secretsmanager_secret.service_name_sm.arn
})
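For context, the secrets_manager_arn key passed to templatefile becomes a template variable, so the .tmpl file can interpolate it with ${secrets_manager_arn}. Hypothetical template contents (the statement body is illustrative, not from the question) might look like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "${secrets_manager_arn}"
    }
  ]
}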

Why does Terraform want to create the custom attribute again?

I am trying to use the vsphere_custom_attribute resource. On the first run against a vSphere instance it runs fine, but on the second run I get the error below.
Do you have any ideas how to solve this, or did I just use it wrong?
I use this version of Terraform and the vSphere provider:
Terraform v0.12.12
provider.template v2.1.2
provider.vsphere v1.13.0
These are the code parts where I create the custom attribute and where I use it.
resource "vsphere_custom_attribute" "hostname" {
name = "hypervisor.hostname"
managed_object_type = "VirtualMachine"
}
resource "vsphere_virtual_machine" "vm" {
...
custom_attributes = "${map(vsphere_custom_attribute.hostname.id, "${var.vsphere_name}${var.vsphere_dom}" )}"
...
}
Error:
Error: could not create custom attribute: ServerFaultCode: The name 'hypervisor.hostname' already exists.
on main.tf line 32, in resource "vsphere_custom_attribute" "hostname":
32: resource "vsphere_custom_attribute" "hostname" {
terraform plan:
I do not understand why Terraform wants to create the custom attribute, as it already exists.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # vsphere_custom_attribute.hostname will be created
  + resource "vsphere_custom_attribute" "hostname" {
      + id                  = (known after apply)
      + managed_object_type = "VirtualMachine"
      + name                = "hypervisor.hostname"
    }
This is because you're telling Terraform to create the resource instead of pulling the data from vSphere. I had this confusion as well until I understood what the 'vsphere_datastore' input required.
Try something like this (I'm using Octopus Deploy for variable replacement, so ignore that I'm using the invalid #{; it should be ${ or just a plain string for 0.12.x):
...
data "vsphere_custom_attribute" "consul_backend_path" {
  name = "consul.backend.path"
}
...
resource "vsphere_virtual_machine" "windows_virtual_machine" {
  ...
  custom_attributes = map(data.vsphere_custom_attribute.consul_backend_path.id, "custom_attribute_value")
  ...
}
Keep in mind, this requires the custom attribute to already exist in vCenter before the virtual_machine resource is created. Otherwise you will need a conditional check to validate whether the attribute is required.
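Applied to the configuration from the question, a minimal sketch (assuming the hypervisor.hostname attribute already exists in vCenter) would read the attribute instead of creating it:
data "vsphere_custom_attribute" "hostname" {
  name = "hypervisor.hostname"
}

resource "vsphere_virtual_machine" "vm" {
  ...
  custom_attributes = map(data.vsphere_custom_attribute.hostname.id, "${var.vsphere_name}${var.vsphere_dom}")
  ...
}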
