I am running an application inside a pod in AKS that provisions an AWS service using Terraform. If that pod is deleted or stopped while provisioning is in progress, the Terraform state file is left corrupted.
When I try provisioning again using that state file, I get an apply error: some of the resources were provisioned but are not recorded in the state file. I get the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket.examplebucket: 1 error(s) occurred:
* aws_s3_bucket.examplebucket: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409
So how do I update the state file so I can use it again?
Not sure the error is related to Kubernetes resources and pods.
But if you need to refresh/recreate the bucket, you can taint it:
terraform taint aws_s3_bucket.examplebucket
terraform plan
terraform apply
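A side note, assuming a newer Terraform version than the one in the question: terraform taint has been deprecated (since v0.15.2) in favor of the -replace planning option, which plans the same destroy-and-recreate in a single step:
terraform apply -replace="aws_s3_bucket.examplebucket"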
Let me know if this is helpful or not.
If terraform tries to create something that already exists, you will need to import the resource into terraform.
Every kind of Terraform resource, in this case aws_s3_bucket, lists at the bottom of its documentation page how to import it.
In this case, the following command should do the trick:
terraform import aws_s3_bucket.examplebucket BUCKETNAME
Replace BUCKETNAME with your bucket's name; the resource address must match the one in your configuration (here aws_s3_bucket.examplebucket).
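After the import, a quick sanity check: a terraform plan should no longer propose creating the bucket. If it shows in-place updates instead, that is just your configuration reconciling arguments with what actually exists:
terraform plan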
Related
I am currently writing a terraform custom provider for my internship project
The main aim of the project is to provision an environment (consisting of several servers) on a private cloud platform.
I created a custom provider using the CRUD operations.
Let's say I want to delete the whole resource by removing the resource block from main.tf. I want to run a terraform plan to see if the deletion is valid: it should do a read of the actual environment and see if the serverState of the server is active. I want it to throw a warning/error in terraform plan if serverState is not empty.
So the main issue right now is that terraform plan only compares the difference between the configuration in main.tf and the actual state file. The error-checking code in the Delete function is therefore not executed; it only runs when terraform apply is used.
Is there any way to throw the error during terraform plan to warn the user before they use the terraform apply command?
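One caveat first: the classic helper/schema SDK is a poor fit here, because its plan-time hook, CustomizeDiff, is not invoked for resources that are being destroyed. The newer terraform-plugin-framework, however, has a ModifyPlan hook that also runs for planned destroys. A minimal sketch under that assumption; the type name environmentResource and the attribute name server_state are placeholders, not from your provider:

package provider

import (
    "context"

    "github.com/hashicorp/terraform-plugin-framework/path"
    "github.com/hashicorp/terraform-plugin-framework/resource"
    "github.com/hashicorp/terraform-plugin-framework/types"
)

// environmentResource is assumed to implement resource.Resource elsewhere
// (the CRUD methods are omitted in this sketch).
type environmentResource struct{}

// ModifyPlan runs during terraform plan, including when the resource block
// has been removed from main.tf and a destroy is being planned.
func (r *environmentResource) ModifyPlan(ctx context.Context, req resource.ModifyPlanRequest, resp *resource.ModifyPlanResponse) {
    // A null planned value means this resource is planned for destruction.
    if !req.Plan.Raw.IsNull() {
        return
    }
    var serverState types.String
    resp.Diagnostics.Append(req.State.GetAttribute(ctx, path.Root("server_state"), &serverState)...)
    if resp.Diagnostics.HasError() {
        return
    }
    if serverState.ValueString() != "" {
        // An error diagnostic here fails terraform plan itself, before any apply.
        resp.Diagnostics.AddError(
            "Environment is still active",
            "serverState is not empty; deactivate the servers before destroying this resource.",
        )
    }
}

Raising the error as a diagnostic in ModifyPlan is what makes it surface at plan time rather than in the Delete function at apply time.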
I'm using Terraform v0.12.14. Whenever I run terraform init, I'm unable to see the Terraform provider in my folder (hidden files are set to visible). Also, the plan command always stops with the message "No changes. Infrastructure is up-to-date." Kindly help; because of these errors I am not able to create the resource group in Azure.
I've created an Azure Storage Account to be used as the backend state store for Terraform, and I was able to write to this from an Azure DevOps pipeline running Terraform commands. I can see the container in the Storage Account and confirm that it has the state content from the pipeline execution in it with that same key. However, when I try to run Terraform "manually" using the same backend store, I'm getting an error that it cannot find that container:
$ terraform init -backend-config="storage_account_name=<redacted>" -backend-config="container_name=auto-api-tfstate" -backend-config="access_key=<redacted>" -backend-config="key=dev-internal2/dev-internal2.tfstate:us"
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error: Error inspecting states in the "azurerm" backend:
storage: service returned error: StatusCode=404, ErrorCode=ContainerNotFound, ErrorMessage=The specified container does not exist.
RequestId:89a9b361-a01e-00b1-0fb4-ba5d51000000
Time:2021-10-06T13:18:41.2460433Z, RequestInitiated=Wed, 06 Oct 2021 13:18:40 GMT, RequestId=89a9b361-a01e-00b1-0fb4-ba5d51000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=
Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.
My main.tf file has simply:
terraform {
  backend "azurerm" {}
}
As mentioned, this same terraform init command worked when invoked in a Bash script in an ADO pipeline, so I'm not sure what the issue may be. Any suggestions for debugging this are appreciated.
Uncovered the issue ... there was state information in the .terraform folder which conflicted with the new backend. Once I cleared that out, the "terraform init" command worked as expected.
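For anyone hitting the same thing: the locally cached backend configuration lives in the working directory's .terraform folder, so the cleanup amounts to the following (with the same init arguments as before):
rm -rf .terraform
terraform init -backend-config=...
Alternatively, terraform init -reconfigure tells Terraform to ignore the saved backend configuration instead of trying to migrate state from it.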
To create a new Terraform state file, I'm importing some legacy Azure resources into a Terraform configuration with a local state file. As expected, my import syntax is as follows:
terraform import <Terraform Resource Name>.<Resource Label> <Azure Resource ID>
Unfortunately, for one of my resources, I used the wrong Resource Label and had to rename it. I then performed a Terraform plan, but as the earlier Resource Label had already been written into the state file, the plan now displays the message that a resource will be destroyed when applied. Just to clarify, the resource with the corrected Resource Label is also written into the state file, so there's no danger of it being destroyed in Azure.
I however want to clean up the local state file by removing the orphaned resource, so that when I run a terraform plan it reports:
"No changes. Your infrastructure matches the configuration"
How can I do so safely without compromising my state file or the legacy resources?
As suggested by @luk2302, I tested the command in my environment: I imported a Key Vault resource into my local state file and then removed only that Key Vault resource from the Terraform state, which was successful.
The resource is only removed from the state file; it can still be found in the portal.
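For reference, the sequence looks like this, with azurerm_key_vault.wrong_label standing in for whatever the mistyped address actually is:
terraform state list
terraform state rm azurerm_key_vault.wrong_label
If the resource had not already been re-imported under the corrected label, terraform state mv <old address> <new address> would be the way to rename it in place instead.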
Reference:
Command: state rm - Terraform by HashiCorp
I am trying to reapply my changes using terraform apply, but when I run it again it gives me a "resource already exists" error and stops the deployment.
Example:
Error: AlreadyExistsException: An alias with the name arn:aws:kms:us-east-1:490449857273:alias/continuedep-cmk-us-east-1 already exists
status code: 400, request id: 4447fd20-d33b-4c87-891e-cc5e09cc6108
on ../../../modules/kms_cmk/main.tf line 11, in resource "aws_kms_alias" "keyalias":
11: resource "aws_kms_alias" "keyalias" {
Error: Error creating DB Subnet Group: DBSubnetGroupAlreadyExists: The DB subnet group 'continuedep-sbg' already exists.
status code: 400, request id: 97d662b6-79d4-4fde-aaf7-a2f3e5a0bd9e
on ../../../modules/rds-postgres/main.tf line 2, in resource "aws_db_subnet_group" "generic_db_subnet_group":
2: resource "aws_db_subnet_group" "generic_db_subnet_group" {
Likewise I get errors for many other existing resources. I want to avoid/ignore such errors and continue my deployment.
What else can I do to resume my Terraform resource deployment from where it was interrupted in the middle?
My terraform version is :
Terraform v0.12.9
The errors are returned by the API that the Terraform provider is calling.
Possible causes of this could be:
you (or someone else) previously executed your Terraform code and you don't have a shared/updated state
someone created the resources manually
a Terraform run was interrupted in a way that created (or deleted) resources through the API but failed to save the updated state
Solutions depend on what you need. You can:
delete those resources from your Terraform code to stop managing them with it
delete those resources via the API (cloud provider) and recreate them with Terraform
perform a terraform import of those resources and remove any duplicate Terraform code that is trying to recreate them (NOT RECOMMENDED)
use terraform apply -target=xxx to apply only the resources you need to apply (NOT RECOMMENDED)
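If you do go the import route, the AWS provider documentation gives the import ID format for each resource type; for the two resources in the error output it would look roughly like this. The module.kms_cmk and module.rds_postgres addresses are guesses based on the file paths, so check terraform state list and your module blocks for the real ones:
terraform import module.kms_cmk.aws_kms_alias.keyalias alias/continuedep-cmk-us-east-1
terraform import module.rds_postgres.aws_db_subnet_group.generic_db_subnet_group continuedep-sbg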