I am noticing this weird error since I moved all of my code from provider version 1.42.0 to 2.19.0. I am creating several Key Vault secrets. Earlier, when I tried to execute a plan after applying once, it used to refresh the state and proceed, but now, after updating the provider version, I am seeing the error below.
Error: A resource with the ID "https://mytestingvault.vault.azure.net/secrets/hub-access/060e71ecd1084cb5a6a496f77a2aea5c" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_key_vault_secret" for more information.
Additionally, I have added a lifecycle ignore_changes block to see if it could skip reading the vault secret changes, but unfortunately the same error is shown. The only way to get rid of the error is to delete the secret. What am I doing wrong here?
lifecycle {
  ignore_changes = [
    value, name
  ]
}
The Terraform key vault documentation says:
Terraform will automatically recover a soft-deleted Key Vault during
Creation if one is found - you can opt out of this using the features
block within the Provider block.
You should configure your Terraform to stop recovering the softly deleted Key Vault like this:
provider "azurerm" {
features {
key_vault {
recover_soft_deleted_key_vaults = false
}
}
}
If you want Terraform to purge any softly deleted Key Vaults when using terraform destroy use this additional line:
provider "azurerm" {
features {
key_vault {
purge_soft_delete_on_destroy = true
recover_soft_deleted_key_vaults = false
}
}
}
You probably need to read up on the general topic of Terraform state management. This is not specific to your Key Vault secret. Some resource (your secret) exists that was not created by the Terraform workspace you are currently executing, and Terraform does not like that. So you either need to import this pre-existing resource into the state of this workspace, or delete it so that it can be re-created (and thereby managed).
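For example, assuming the secret from the error above is declared in your configuration as azurerm_key_vault_secret.hub_access (the resource address is a guess), the import would look roughly like this, using the versioned secret ID from the error message:
terraform import azurerm_key_vault_secret.hub_access "https://mytestingvault.vault.azure.net/secrets/hub-access/060e71ecd1084cb5a6a496f77a2aea5c"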
The issue was that even though the Key Vault had been deleted, it was still soft-deleted and we had to purge it via the Azure CLI.
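For reference, the purge is along these lines (assuming the Azure CLI is logged in to the right subscription; the vault name is taken from the error above):
az keyvault purge --name mytestingvault
A single soft-deleted secret can also be purged on its own with az keyvault secret purge --vault-name mytestingvault --name hub-access.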
I have a diagnostic setting configured on my master database, as shown below in my main.tf:
resource "azurerm_monitor_diagnostic_setting" "main" {
name = "Diagnostic Settings - Master"
target_resource_id = "${azurerm_mssql_server.main.id}/databases/master"
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
log {
category = "SQLSecurityAuditEvents"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
lifecycle {
ignore_changes = [log, metric]
}
}
If I don't delete it in the resource group before I run Terraform, I get the error:
Diagnostic Settings - Master" already exists - to be managed via
Terraform this resource needs to be imported into the State
I know that if I delete the SQL Server the diagnostic setting remains - but I don't know why that is a problem with Terraform. I have also noticed that it is in my tfplan.
What could be the problem?
If you created the resource in Azure in a different way (e.g. via the Portal, templates, the CLI or PowerShell), Terraform is not aware that the resource already exists in Azure. So, during terraform plan it shows you what will be created based on what you have written in main.tf, but when you run terraform apply the azurerm provider checks the resource names against the existing resources of the same resource providers and returns an error saying the resource already exists and needs to be imported to be managed by Terraform.
Also, if you created everything from Terraform, then running terraform destroy deletes all the resources present in main.tf.
Well, it's in the .tfplan and also it's in main.tf - so it's imported, right?
Mentioning the resource and its details in main.tf and the .tfplan doesn't mean that you have imported the resource or that Terraform is aware of it. Terraform is only aware of the resources stored in its state file, i.e. the .tfstate.
So, to overcome the error without deleting the resource from the Portal, you will have to keep the resource in main.tf as you have already done and then use the terraform import command to import the Azure resource into the Terraform state file, like below:
terraform import azurerm_monitor_diagnostic_setting.example "{resourceID}|{DiagnosticsSettingsName}"
So, for you it will be like:
terraform import azurerm_monitor_diagnostic_setting.main "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<SQLServerName>/databases/master|Diagnostic Settings - Master"
After the import is done, any changes you make from Terraform to that resource will be reflected in the Portal as well, and you will also be able to destroy the resource from Terraform.
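To double-check that the import landed, something like the following (using the resource address from your main.tf) should now print the imported attributes, and a subsequent terraform plan should no longer try to create the resource:
terraform state show azurerm_monitor_diagnostic_setting.main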
My title might not have summed up my question correctly.
So I have a Terraform stack that creates a resource group and a Key Vault, amongst other things. This has already been run and the resources exist.
I am now adding another resource to this same Terraform stack, namely a MySQL server. Now, I know that if I just re-run the stack it will check the state file and just add my MySQL server.
However, as part of this MySQL server creation I am providing a password, and I want to write this password to the Key Vault that already exists.
If I were doing this from the start, my Terraform would look like:
resource "azurerm_key_vault_secret" "sqlpassword" {
name = "flagr-mysql-password"
value = random_password.sqlpassword.result
key_vault_id = azurerm_key_vault.shared_kv.id
depends_on = [
azurerm_key_vault.shared_kv
]
}
However, I believe that because the Key Vault already exists, this would error, since Terraform wouldn't know the value of azurerm_key_vault.shared_kv.id unless I destroy the Key Vault and allow Terraform to recreate it. Is that correct?
I could replace azurerm_key_vault.shared_kv.id with the actual resource ID from Azure, but then if I were to ever run this stack to create a new environment, I presume it would be writing the value into my old Key Vault?
I have done this recently for an AWS deployment: you would run terraform import on the azurerm_key_vault.shared_kv resource to bring it under Terraform management, and then you would be able to deploy azurerm_key_vault_secret.
To import, you will need to write the azurerm_key_vault.shared_kv resource block so that it matches the existing vault (this will require a few iterations).
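A minimal sketch of what that could look like, assuming azurerm provider 2.x; all the literal values are placeholders and have to be adjusted to match the existing vault:

data "azurerm_client_config" "current" {}

# Hypothetical definition of the existing vault - adjust name, location,
# resource group and SKU to the real values before importing.
resource "azurerm_key_vault" "shared_kv" {
  name                = "my-shared-kv"
  location            = "westeurope"
  resource_group_name = "my-shared-rg"
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}

The import itself would then be along the lines of:
terraform import azurerm_key_vault.shared_kv /subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.KeyVault/vaults/<VaultName>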
How do I resolve the error below?
Error: Provider configuration not present
To work with module.my_ec2.aws_instance.web[0] (orphan) its original provider
configuration at module.my_ec2.provider["registry.terraform.io/hashicorp/aws"]
is required, but it has been removed. This occurs when a provider
configuration is removed while objects created by that provider still exist in
the state. Re-add the provider configuration to destroy
module.my_ec2.aws_instance.web[0] (orphan), after which you can remove the
provider configuration again.
Releasing state lock. This may take a few moments...
Try adding something like this:
provider "aws" {
version = "3.10.0"
region = "eu-west-1"
profile = "default"
}
Then run terraform init and try running a plan again.
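On Terraform 0.13 and later the provider version pin usually lives in a required_providers block rather than in the provider block itself; a sketch with the same version, region and profile as above would be:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.10.0"
    }
  }
}

provider "aws" {
  region  = "eu-west-1"
  profile = "default"
}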
I am managing KMS keys and key rings with the GCP Terraform provider:
resource "google_kms_key_ring" "vault" {
name = "vault"
location = "global"
}
resource "google_kms_crypto_key" "vault_init" {
name = "vault"
key_ring = google_kms_key_ring.vault.self_link
rotation_period = "100000s" #
}
When I ran this for the first time, I was able to create the keys and key rings successfully, and terraform destroy also executed successfully without any errors.
The next time I do a terraform apply, I just use terraform import to import the resources from GCP, and the code execution works fine.
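For reference, those imports look roughly like this (the project ID is a placeholder, and the IDs follow the pattern visible in the destroy output below; if the resources live inside a module, prefix the addresses accordingly, e.g. module.cluster_vault.google_kms_crypto_key.vault_init):
terraform import google_kms_key_ring.vault projects/<MY-PROJECT>/locations/global/keyRings/vault
terraform import google_kms_crypto_key.vault_init projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault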
But after a while, a certain key version (version 1) was destroyed. Now every time I do a terraform destroy, I get the error below:
module.cluster_vault.google_kms_crypto_key.vault_init: Destroying... [id=projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault]
Error: googleapi: Error 400: The request cannot be fulfilled. Resource projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault/cryptoKeyVersions/1 has value DESTROYED in field crypto_key_version.state., failedPrecondition
Is there a way to suppress this particular error? Key versions 1-3 are destroyed.
At present, Cloud KMS resources cannot be deleted. This goes against Terraform's desired behavior of being able to completely destroy and re-create resources. You will need to use a different key name or key ring name to proceed.
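One workaround along those lines (just a sketch, using the hashicorp/random provider; the suffix length is arbitrary) is to bake a random suffix into the key ring and key names, so that each destroy/apply cycle gets fresh names instead of colliding with previously destroyed keys:

resource "random_id" "kms_suffix" {
  byte_length = 4
}

resource "google_kms_key_ring" "vault" {
  name     = "vault-${random_id.kms_suffix.hex}"
  location = "global"
}

resource "google_kms_crypto_key" "vault_init" {
  name            = "vault-${random_id.kms_suffix.hex}"
  key_ring        = google_kms_key_ring.vault.self_link
  rotation_period = "100000s"
}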
I have worked with Terraform before, where it could place the tfstate files in S3. Does Terraform also support Azure Blob Storage as a backend? What would be the commands to set the backend to Azure Blob Storage?
As of Terraform 0.7 (not currently released but you can compile from source) support for Azure blob storage has been added.
The question asks for some commands, so I'm adding a little more detail in case anyone needs it. I'm using Terraform v0.12.24 and azurerm provider v2.6.0. You need two things:
1. Create a storage account (general purpose v2) and a container for storing your states (a CLI sketch follows below).
2. Configure your environment and your main.tf.
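For the first point, a rough sketch with the Azure CLI (the resource group name and location are placeholders; the account and container names match the ones used in main.tf below) could be:
az group create --name tfstate-rg --location westeurope
az storage account create --name abcd1234 --resource-group tfstate-rg --sku Standard_LRS --kind StorageV2
az storage container create --name tfstatecontainer --account-name abcd1234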
As for the second point, your terraform block in main.tf should contain an "azurerm" backend:
terraform {
  required_version = "=0.12.24"

  backend "azurerm" {
    storage_account_name = "abcd1234"
    container_name       = "tfstatecontainer"
    key                  = "example.prod.terraform.tfstate"
  }
}

provider "azurerm" {
  version         = "=2.6.0"
  subscription_id = var.subscription_id

  features {}
}
Before calling plan or apply, set the ARM_ACCESS_KEY environment variable with a bash export:
export ARM_ACCESS_KEY=<storage access key>
Finally, run the init command:
terraform init
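Alternatively, the access key can be passed to init directly instead of via the environment, for example:
terraform init -backend-config="access_key=<storage access key>"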
Now, if you run terraform plan you will see the tfstate created in the container. Azure has a file locking feature built in, in case anyone tries to update the state file at the same time.