So, if you have been working with Terraform for some time, you have probably faced this error message more than once:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value:
Usually, this is caused by something getting screwed up during a destroy operation, leaving a mismatch between the state and the lock.
The known solution for this is to delete the lock entry from DynamoDB and run terraform init again. If that doesn't resolve it, also delete the tfstate from S3, which at this point doesn't contain any data anyway, as the infrastructure was destroyed.
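For reference, a minimal sketch of that clean-up from the CLI, assuming a lock table named terraform-locks and a bucket/key matching the backend config (all names here are placeholders, not the real ones):
# The digest record the S3 backend checks has a LockID of "<bucket>/<key>-md5"
aws dynamodb delete-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate-md5"}}'
aws s3 rm s3://my-bucket/path/to/terraform.tfstate
terraform init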
Surprisingly, neither is working now, and I don't have a clue why. There is no tfstate in the bucket; in fact, I even deleted every old version stored (the bucket has versioning enabled). There is no lock in DynamoDB either.
Changing the tfstate name works without issues, but I can't change it as it would break the naming convention I'm using for all my tfstates.
So, any ideas what's going on here? On Friday, the infrastructure was deployed and destroyed without issues (as part of the destroy, I always check that no lock is left behind and delete the tfstate from S3). But today I'm facing this error, it's been a while already, and I can't figure it out.
Holy ****!
So, it turns out DynamoDB was being a serious troll here.
I searched for the md5 key of this tfstate, and it wasn't returned. But then I noticed a message saying there were more items to be returned... After clicking that button about 6 times, it eventually returned a hidden lock for this tfstate.
Deleted it, and everything is back to normal again.
So, in summary, if you ever face this issue and can't find the lock in DynamoDB... make sure all items have been returned by the query, as it can take many attempts to get them all.
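If you'd rather avoid the console's paging altogether, a scan from the CLI returns every matching item in one go (the table name and state key below are placeholders):
aws dynamodb scan \
  --table-name terraform-locks \
  --filter-expression "contains(LockID, :k)" \
  --expression-attribute-values '{":k": {"S": "path/to/terraform.tfstate"}}'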
I am currently trying to represent a new Google Cloud project in Terraform. To complete this, I still need to create the domain mapping for my Cloud Run instance in Terraform.
The domain mapping has already been created in the UI and Terraform cannot overwrite it. When I run terraform apply I get an error message that the domain mapping already exists.
I first tried to import the domain mapping with terraform import. The command returns a success message, but as soon as I run terraform apply I get an error saying the domain mapping already exists but is flagged for deletion, and that I should try the command again after the deletion completes. A few hours later, the resource is still not deleted.
Next I tried to delete the domain mapping in the UI. This does not work, I get the message:
Delete domain mapping failed. Deletion in progress. Resource readiness deadline exceeded.
I have also tried it via the command line interface:
gcloud beta run domain-mappings delete --domain=<domain> --namespace=<namespace>
But I get the same error message as before.
Does anybody know another way of deleting / importing / fixing / working around this problem? I'd appreciate any help!
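In case it helps anyone hitting the same wall: before retrying, you can at least check whether the deletion is still reported as pending. This is only a sketch that mirrors the flags from the delete command above (domain and namespace are placeholders):
gcloud beta run domain-mappings describe --domain=<domain> --namespace=<namespace>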
I am using terraform cloud workspaces.
By mistake, I uploaded the wrong Terraform state to my workspace. Now my terraform plan is using it, but I don't want to use it since that is not the state I wanted.
Let me explain with this image: I want to use the state from 10 months ago and not the New State I got recently:
I want to go back to my old state because some resources are missing from the new one, so they are being recreated in my terraform plan.
I am trying to import every resource separately by executing the terraform import <RESOURCE-INSTANCE> <ID> command, like this:
terraform import azurerm_app_service_plan.fem-plan /subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1
Acquiring state lock. This may take a few moments...
azurerm_app_service_plan.fem-plan: Importing from ID "/subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1"...
azurerm_app_service_plan.fem-plan: Import prepared!
Prepared azurerm_app_service_plan for import
azurerm_app_service_plan.fem-plan: Refreshing state... [id=/subscriptions/d37e88d5-e443-4285-98c9-91bf40d514f9/resourceGroups/rhd-spec-prod-rhdhv-1mo5mz5r2o4f6/providers/Microsoft.Web/serverfarms/fem-plan-afd1]
Error: Cannot import non-existent remote object
While attempting to import an existing object to
azurerm_app_service_plan.fem-plan, the provider detected that no object exists
with the given id. Only pre-existing objects can be imported; check that the
id is correct and that it is associated with the provider's configured region
or endpoint, or use "terraform apply" to create a new remote object for this
resource.
Releasing state lock. This may take a few moments...
But the output says that the resource does not exist, because Terraform is using my latest (new) state, where that resource is not included.
How can I use my old state from 10 months ago?
If someone can point me in the right direction, I would appreciate it.
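One possible approach, sketched under the assumption that you can download the 10-month-old version from the workspace's States tab in Terraform Cloud (the filenames below are placeholders), is to push that file back as the current state:
terraform state pull > current-state-backup.tfstate   # keep a backup of what is there now
terraform state push -force state-from-10-months-ago.tfstate
# -force skips the serial/lineage check, so double-check you picked the right file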
Maybe you can use the command terraform refresh to update the state file to match the current state of the resources. You can get the details of the command here.
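For example (on newer Terraform versions the same thing can be done with the -refresh-only flag):
terraform refresh
# on Terraform 0.15.4 and later, the recommended equivalent is:
terraform apply -refresh-only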
When we try to run a Terraform script with remote state handling, we get the below issue:
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value
There are 3 possible workarounds to solve it, depending on your specific scenario:
Case 1
If you have a backup of your AWS S3 terraform.tfstate file, you could restore your state backend "s3" {key = "path/to/terraform.tfstate"} to an older version. Retry terraform init and validate that it works.
Case 2
Remove the out-of-sync entry in the AWS DynamoDB table. There will be a LockID entry in the table containing the state path and expected checksum; delete it, and it will be regenerated after re-running terraform init.
IMPORTANT CONSIDERATIONS:
During this process you'll lose the state lock protection, which prevents others from acquiring the lock; potentially two people could simultaneously update the same resources and corrupt your state.
Please consider using the terraform refresh command (https://www.terraform.io/docs/commands/refresh.html), which is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure. This can be used to detect any drift from the last-known state and to update the state file.
Delete the DynamoDB LockID table entry.
Case 3
If, after a terraform destroy, you have manually deleted your AWS S3 terraform.tfstate file and are now trying to spin up a new instance of all the resources declared in the tfstate, meaning you're working from scratch, you could just update your AWS S3 state backend key from "s3" {key = "path/to/terraform.tfstate"} to a new one, "s3" {key = "new-path/to/terraform.tfstate"}. Retry terraform init and validate that it works. This workaround has the limitation that you haven't really solved the root cause; you're just bypassing the problem by using a new key for the S3 tfstate.
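After pointing the backend at the new key, re-initialize so Terraform picks it up; for example:
terraform init -reconfigure   # ignore the previously saved backend configuration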
Encountered the same issue today, but despite Exequiel Barrierero's excellent advice there was no obvious error-- until I found it.
In our case we have an older Terragrunt stack (v0.23.30 / terraform v0.12.31), and one component module for CloudWatch was throwing this same error:
Error: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a
previous state update. Please wait for a minute or two and try again.
If this problem persists, and neither S3 nor DynamoDB are experiencing
an outage, you may need to manually verify the remote state and update
the Digest value stored in the DynamoDB table to the following value:
... no actual Digest value was supplied. However, I'd recently been refactoring some of our component module files to reduce complexity in our stack, and discovered a dangling data.terraform_remote_state for an element that no longer existed: I'd merged the modules, and there was no longer a remote state present for that particular data element.
Once I removed the invalid data.terraform_remote_state reference, both plan and apply completed without any hitches or errors.
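If you suspect the same kind of leftover, a plain text search is enough to spot dangling references (nothing Terragrunt-specific here):
grep -rn "terraform_remote_state" . --include="*.tf" --include="*.hcl"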
This happened to me while I was trying to perform a "terraform init" from a new machine.
So I deleted the DynamoDB lock and tried most of the stuff from this thread, with no luck.
An important point: while deleting the lock, I saw multiple records there and remembered we are also using Terraform workspaces. So the problem was fixed when I created and switched to the correct workspace on the new machine.
I imagine the reason was that my Terraform files had different resources than the state on S3 for the default workspace while running the init command. So I switched to the right workspace:
terraform workspace new 'dev'
(or)
terraform workspace select 'dev'
(and then)
terraform init
And after that everything worked out smoothly again.
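If you are not sure which workspaces the backend already has, you can check before selecting one:
terraform workspace list   # the current workspace is marked with an asterisk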
I was able to resolve this issue by updating the Digest value on DynamoDB with the one provided by the error:
Error refreshing state: state data in S3 does not have the expected
content.
This may be caused by unusually long delays in S3 processing a
previous state update. Please wait for a minute or two and try again.
If this problem persists, and neither S3 nor DynamoDB are experiencing
an outage, you may need to manually verify the remote state and update
the Digest value stored in the DynamoDB table to the following value:
8dcbb62b2ddf9b5daebd612fa524a7be
I looked at the DynamoDB item that contains the terraform.tfstate-md5 LockID and replaced the Digest value.
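For anyone who prefers doing the same from the CLI instead of the console, a sketch (the table, bucket, and key names are placeholders; the digest is the one printed in the error):
aws dynamodb put-item \
  --table-name terraform-locks \
  --item '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate-md5"}, "Digest": {"S": "8dcbb62b2ddf9b5daebd612fa524a7be"}}'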
The problem is with the Terraform state file. When you use S3 as the backend for remote state handling, you get this error because there is a mismatch between the S3 state file and the DynamoDB record.
Try out the below steps.
1) Delete the DynamoDB record for that particular state entry.
2) Delete the state file in the S3 location.
3) Now initialize and apply Terraform from the beginning.
This will create a new state entry in DynamoDB, add a new state file in S3, and the problem will be solved.
Happy coding...
You can also try to remove the -md5 item from DynamoDB.
It can happen that there is a mismatch between the Digest field in the DynamoDB table and the ETag of the state file on S3. For things to work, they should be the same.
Just make sure the Digest in DynamoDB equals the ETag of the file on S3 by updating the DynamoDB entry manually.
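You can read the ETag straight from S3 to compare against the Digest (bucket and key are placeholders; keep in mind the ETag only equals the content MD5 for plain, non-multipart uploads without KMS encryption):
aws s3api head-object --bucket my-bucket --key path/to/terraform.tfstate --query ETag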
This happened to me today. Here's my scenario:
I use terraform workspaces.
The backend is set up with S3 and DynamoDB.
For me, the problem occurred on a new machine.
On a new machine, you first have to do terraform init, which in my case resulted in the message below:
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value:
The above message is actually suggesting that the Digest value in DynamoDB should be blank. When I inspected DynamoDB, there was a Digest value against one of the records. I removed it and it started working.
So, in my opinion, here's what happened in my case:
While experimenting with Terraform, I was first trying things without workspaces, and once I introduced workspaces, I had also done a few things in the default workspace. Although I destroyed that infra, the Digest value must have somehow remained in DynamoDB.
On a new machine, doing terraform init initializes the terraform backend and will also bring in the workspaces that you have. In this case it will be default and any other workspace that you may have created.
The default workspace is selected by default, and since it is expected to be clean, the command failed on finding a Digest value in DynamoDB.
Therefore, if you're dealing with the same problem, you can try this:
Check if there is a Digest value against the LockID that does not have "env:" in it. That is the one that belongs to the default workspace.
Copy the Digest somewhere, in case what you're going to do next does not solve the problem and you need to put it back.
Clear the Digest value.
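A CLI sketch of that last step, assuming a lock table called terraform-locks (the table name and LockID below are placeholders):
aws dynamodb update-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate-md5"}}' \
  --update-expression "REMOVE Digest"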
This can happen when the contents of the S3 bucket were deleted and a previous state operation has not finished processing yet. Please wait a few minutes and then try again.
I finished the setup for the Azure hub and installed the Client Agent and database.
Then I defined the dataset.
At that point, whichever database I chose, when I clicked "get latest schema" I got the error.
Error is
The get schema request is either taking a long time or has failed.
When I checked the log, it said the following:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first.
For more information, provide tracing id ‘xxxx’ to customer support.
Any idea for this?
The current release has a maximum of 500 tables per sync group. Also, the drop-down for the tables list is restricted to this same limit.
Here's a quick workaround:
script the tables you want to sync
create a new temporary database and run the script to create the tables you want to sync (see the sketch after this list)
register and add the new temporary database as a member of the sync group
use the new temporary database to pick the tables you want to sync
add all other databases that you want to sync with (on-premise databases and hub database)
once the provisioning is done, remove the temporary database from the sync group.
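For the scripting step, if you prefer the command line over SSMS, something like this can run the generated script against the temporary database (server, database, credentials, and file name are all placeholders):
sqlcmd -S yourserver.database.windows.net -d TempSyncDb -U youruser -P yourpassword -i sync-tables.sql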