Terraform apply fails while creating a resource that already exists

I am working with Terraform and OpenStack as the cloud provider. I have a deploy.tf script that creates a role:
resource "openstack_identity_role_v3" "role_example" {
name = "creator"
}
My findings on how Terraform creates resources:
If the role does not exist in OpenStack, Terraform creates one with no problem.
If the role exists in OpenStack and was created by the same Terraform script, i.e. terraform.state has an entry for it, terraform apply returns with no errors.
My issue is: if I remove the state file, or if the role was created out of band, either manually or by some other Terraform script, I get the following error:
* openstack_identity_role_v3.role_example: Error creating OpenStack role: Expected HTTP response code [201] when accessing [POST https://<example-openstack-url>/v3/roles], but got 409 instead
{"error": {"message": "Conflict occurred attempting to store role - Duplicate Entry", "code": 409, "title": "Conflict"}}
I am trying to find a workaround so that if the role doesn't exist, terraform apply creates it, and if it already exists, even though it was created manually or by some other Terraform deployment script, Terraform skips its creation and throws no error.
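A sketch of the usual workaround is to adopt the existing role into state with terraform import before applying again. The role ID below is a placeholder; per the OpenStack provider docs, roles are imported by ID, and the lookup assumes the openstack CLI is configured:
# look up the existing role's ID
openstack role show creator -f value -c id
# adopt it into state so apply stops trying to re-create it
terraform import openstack_identity_role_v3.role_example <role-id>
After the import, terraform plan should report no changes for the role instead of attempting a second create.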

Related

GCP Rename db name manually and add to terraform

I created a DB via Terraform, and after that I removed it and created it again with another name.
When I changed the DB name in Terraform, it says:
Error: Error creating Database: googleapi: Error 400: Invalid request: failed to create database YYY. Detail: pq: database "YYY" already exists., invalid
I have restored a backup file and don't want to remove and recreate the database again via Terraform.
Do you know how I can fix it?
"I have removed and created it again with another name."
Did you do that manually or using Terraform? If you did it manually, try importing the YYY database into the Terraform state using terraform import.
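As a sketch, the documented import ID for a google_sql_database takes the form projects/{project}/instances/{instance}/databases/{name}; the resource address google_sql_database.mydb and the project/instance values below are placeholders for whatever your configuration uses:
terraform import google_sql_database.mydb projects/<project>/instances/<instance>/databases/YYY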

Accessing existing resource info from new resources

My header might not have summed up my question correctly.
So I have a Terraform stack that creates a resource group and a key vault, amongst other things. This has already been run and the resources exist.
I am now adding another resource to this same Terraform stack, namely a MySQL server. Now I know that if I just re-run the stack, it will check the state file and just add my MySQL server.
However, as part of this MySQL server creation I am providing a password, and I want to write this password to the key vault that already exists.
If I were doing this from the start, my Terraform would look like:
resource "azurerm_key_vault_secret" "sqlpassword" {
name = "flagr-mysql-password"
value = random_password.sqlpassword.result
key_vault_id = azurerm_key_vault.shared_kv.id
depends_on = [
azurerm_key_vault.shared_kv
]
}
However, I believe that because the key vault already exists, this would error, as Terraform wouldn't know the value azurerm_key_vault.shared_kv.id unless I destroy the key vault and allow Terraform to recreate it. Is that correct?
I could replace azurerm_key_vault.shared_kv.id with the actual resource ID from Azure, but then if I were ever to run this stack to create a new environment, it would write the value into my old key vault, I presume?
I have done this recently for an AWS deployment: you would run terraform import on the azurerm_key_vault.shared_kv resource to bring it under Terraform management, and then you would be able to deploy azurerm_key_vault_secret.
To import, you will need to write the azurerm_key_vault.shared_kv resource block to match the existing key vault (this will require a few iterations).
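A minimal sketch of that import, assuming the vault's full Azure resource ID (the subscription, resource group, and vault names below are placeholders):
terraform import azurerm_key_vault.shared_kv /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>
Once imported, azurerm_key_vault.shared_kv.id resolves from state, so the secret can reference it without hardcoding an ID, and a fresh environment (with a fresh state) would simply create its own vault.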

Error message while deleting google_kms_crypto_key resource

I am managing KMS keys and key rings with the GCP Terraform provider:
resource "google_kms_key_ring" "vault" {
name = "vault"
location = "global"
}
resource "google_kms_crypto_key" "vault_init" {
name = "vault"
key_ring = google_kms_key_ring.vault.self_link
rotation_period = "100000s" #
}
When I ran this for the first time, I was able to create the keys and key rings successfully, and a terraform destroy also executed successfully without any errors.
The next time I do a terraform apply, I just use terraform import to import the resources from GCP, and the code execution works fine.
But after a while, crypto key version 1 was destroyed. Now every time I do a terraform destroy, I get the below error:
module.cluster_vault.google_kms_crypto_key.vault_init: Destroying... [id=projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault]
Error: googleapi: Error 400: The request cannot be fulfilled. Resource projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault/cryptoKeyVersions/1 has value DESTROYED in field crypto_key_version.state., failedPrecondition
Is there a way to suppress this particular error? Key versions 1-3 are destroyed.
At present, Cloud KMS resources cannot be deleted. This is against Terraform's desired behavior of being able to completely destroy and re-create resources. You will need to use a different key name or key ring name to proceed.
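If the goal is only to stop terraform destroy from calling the delete API for this key, one workaround the answer above doesn't mention is dropping the key from state with terraform state rm, using the resource address from the destroy output; the key then simply remains in GCP, untouched:
terraform state rm module.cluster_vault.google_kms_crypto_key.vault_init
terraform destroy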

How to ignore duplicate resource error during terraform apply?

I am trying to reapply my changes using terraform apply, but when I do it again it gives me an error that a resource already exists and stops the deployment.
Example:
Error: AlreadyExistsException: An alias with the name arn:aws:kms:us-east-1:490449857273:alias/continuedep-cmk-us-east-1 already exists
status code: 400, request id: 4447fd20-d33b-4c87-891e-cc5e09cc6108
on ../../../modules/kms_cmk/main.tf line 11, in resource "aws_kms_alias" "keyalias":
11: resource "aws_kms_alias" "keyalias" {
Error: Error creating DB Subnet Group: DBSubnetGroupAlreadyExists: The DB subnet group 'continuedep-sbg' already exists.
status code: 400, request id: 97d662b6-79d4-4fde-aaf7-a2f3e5a0bd9e
on ../../../modules/rds-postgres/main.tf line 2, in resource "aws_db_subnet_group" "generic_db_subnet_group":
2: resource "aws_db_subnet_group" "generic_db_subnet_group" {
Likewise, I get errors with many other existing resources. I want to avoid/ignore such errors and continue my deployment.
What other way can I use to restart my Terraform resource deployment from where it was interrupted in the middle?
My Terraform version is:
Terraform v0.12.9
The errors are returned by the API the Terraform provider is calling.
Possible causes of this could be:
you (or someone else) have executed your Terraform code and you don't have a shared / updated state
someone has created them manually
a terraform destroy failed in a way that deleted the resources from the API but failed to save the updated state
Solutions depend on what you need. You can:
delete those resources from your Terraform code to stop managing them with it
delete those resources from the API ( cloud provider ) and recreate them with Terraform
perform a terraform import of those resources and remove the Terraform code that is trying to recreate them (NOT RECOMMENDED; sketched after this list for reference)
use terraform apply --target=xxx to apply only resources you need to apply (NOT RECOMMENDED)
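For reference, a sketch of the import route for the two failing resources above. The module addresses are placeholders (terraform state list or the plan output shows the real ones); an aws_kms_alias is imported by its alias name and an aws_db_subnet_group by its group name:
terraform import module.<kms-module>.aws_kms_alias.keyalias alias/continuedep-cmk-us-east-1
terraform import module.<rds-module>.aws_db_subnet_group.generic_db_subnet_group continuedep-sbg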

How to handle a corrupted Terraform tfstate file

I am running an application inside a pod in AKS that provisions an AWS service using Terraform. If that pod is deleted or stopped while provisioning is in progress, the Terraform state file gets corrupted.
When I try provisioning again using that state file, I get an apply error: some of the resources were provisioned but are not recorded in the state file. I get the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket.examplebucket: 1 error(s) occurred:
* aws_s3_bucket.examplebucket: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409
So how do I update the state file so I can use it again?
I'm not sure the error is related to Kubernetes resources and pods.
But if you need to refresh / recreate the bucket, you can taint it:
terraform taint aws_s3_bucket.examplebucket
terraform plan
terraform apply
Let me know if this is helpful or not.
If Terraform tries to create something that already exists, you will need to import the resource into Terraform.
Every kind of Terraform resource, in this case an aws_s3_bucket, documents at the bottom of its page how to import it.
In this case, the following command should do the trick:
terraform import aws_s3_bucket.examplebucket BUCKETNAME
Replace BUCKETNAME with your bucket's name.
