I have a Terraform module that manages Snowflake resources. When CircleCI tries to apply it, I often get:
Error: Failed to save state
Error saving state: Error uploading state: Conflict
This workspace is locked by Run
run-[ ID ] and will only accept state uploads from
them while locked.
and
Error: Failed to persist state to backend
The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working directory.
Running "terraform apply" again at this point will create a forked state, making it harder to recover.
To retry writing this state, use the following command:
terraform state push errored.tfstate
However, I am able to run terraform apply from my local machine without any issues; only the CI seems to have this problem.
What might cause this error, and how can I fix it?
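For context, the error message suggests the state is stored in a remote (Terraform Cloud) workspace. Here is a minimal sketch of the kind of backend block involved, purely for illustration; the organization and workspace names are placeholders rather than real values:
terraform {
  backend "remote" {
    organization = "example-org" # placeholder

    workspaces {
      name = "snowflake-resources" # placeholder
    }
  }
}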
I am new to Go and Terraform plugin development. I was trying to generate plugin documentation with tfplugindocs, and I keep getting errors.
Below is the output of the tfplugindocs execution.
My plugin is still under development and is not registered with the Terraform Registry.
What could be the cause of this error?
% go run github.com/hashicorp/terraform-plugin-docs/cmd/tfplugindocs
rendering website for provider "hashicups"
exporting schema from Terraform
compiling provider "hashicups"
getting Terraform binary
running terraform init
getting provider schema
Error executing command: unable to generate website:
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be
located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
failed to instantiate provider "registry.terraform.io/hashicorp/hashicups" to
obtain schema: Unrecognized remote plugin message: open : no such file or
directory
This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
exit status 1
I noticed when this error started: it appears to have begun when I commented out the Address field in the providerserver.ServeOpts struct in main.go, like this:
opts := providerserver.ServeOpts{
    // TODO: Update this string with the published name of your provider.
    // Address: "registry.terraform.io/davidalpert/myproject",
    Debug: debug,
}
Once I uncommented that Address field, I was able to run the build (which runs generate, which in turn runs tfplugindocs) without error.
opts := providerserver.ServeOpts{
    // TODO: Update this string with the published name of your provider.
    Address: "registry.terraform.io/davidalpert/myproject",
    Debug:   debug,
}
My plugin is not published yet either, so that Address value is not valid, but that doesn't seem to matter so long as the Address field is set to override the default.
I am experiencing some weird behaviour with Terraform. I have been working on some infrastructure and have a backend configured to store my state file in a storage account in Azure. Until yesterday everything was fine; this morning, when I tried to update my infrastructure, the output from terraform plan was weird: it is trying to create all the resources as new, and when I checked my local tfstate, it was empty.
I tried terraform state pull and terraform refresh, but nothing; still the same result. I checked my remote state and all the resources are still declared there.
So I went for plan B: copy and paste my remote state into my local project and run Terraform once again. But nothing; it seems that Terraform is ignoring the state on my local machine and doesn't want to pull the remote one.
EDIT:
This is the structure of my Terraform backend:
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource-group-name>"
    storage_account_name = "<storage-name>"
    container_name       = "<container-name>"
    key                  = "terraform.tfstate"
  }
}
The weird thing is that I just used Terraform to create 8 resources for another project, and it created everything and updated my backend state without any issue. The problem is only with the old resources.
Any help please?
If you run terraform workspace show, are you in the default workspace?
If you have the tfstate locally but you're not on the correct workspace, Terraform will ignore it: https://www.terraform.io/docs/language/state/workspaces.html#using-workspaces
Also, is it possible to see your backend file structure?
EDIT:
I don't know why it ignores your remote state, but I think your problem is that when you run terraform refresh it ignores your local file because you have a remote config:
Usage: terraform refresh [options]
-state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
-state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.
Is it possible to see the output of your terraform state pull?
I often see errors like this when updating a configuration with terraform apply:
Error: error updating Batch Compute Environment (dev-james-data-ingest): Manually scaling down compute environment is not supported. Disconnecting job queues from compute environment will cause it to scale-down to minvCpus.
status code: 400, request id: ed1a562b-dfc8-4e3f-af9b-e3efba4dbfe7
My experience has been that this seems to be an innocuous error that can be safely ignored. (But is this actually the case?)
I'm writing a Python script that wraps Terraform commands, calling the terraform executable via the sh module. Under the above assumption about the harmlessness of this error, I catch the exception raised by the call to sh.terraform; if the stderr contains the known error message, I swallow it, otherwise I re-raise the exception:
import sh

try:
    sh.terraform("apply", "-auto-approve", plan_file)
except sh.ErrorReturnCode as err_code:
    # Re-raise unless stderr contains the known scale-down message.
    if (
        "Manually scaling down compute environment is not supported. "
        "Disconnecting job queues from compute environment will cause it to scale-down to minvCpus."
        not in str(err_code.stderr)
    ):
        raise
Am I detecting this error correctly, or is there a way to capture more specific information from Terraform and use that to determine whether the issue is the compute environment scale-down issue or not? Or is this scale-down problem really a bona fide error, and should the terraform apply command be considered to have failed overall?
Environment
Terraform v0.12.24
+ provider.aws v2.61.0
Running in an Alpine container.
Background
I have a basic Terraform configuration running OK, but now I'm extending it and am trying to configure a remote (S3) state backend.
terraform.tf:
terraform {
  backend "s3" {
    bucket         = "labs"
    key            = "com/company/labs"
    region         = "eu-west-2"
    dynamodb_table = "labs-tf-locks"
    encrypt        = true
  }
}
The bucket exists, and so does the table. I have created them both with terraform and have confirmed through the console.
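For reference, here is a minimal sketch of the kind of resources backing this setup. The bucket and table names match the backend block above, but everything else (for example, the billing mode) is an illustrative assumption; the lock table must use LockID as its string hash key:
resource "aws_s3_bucket" "state" {
  bucket = "labs"
}

resource "aws_dynamodb_table" "locks" {
  name         = "labs-tf-locks"
  billing_mode = "PAY_PER_REQUEST" # assumption; any billing mode works
  hash_key     = "LockID"          # attribute name the S3 backend expects for locking

  attribute {
    name = "LockID"
    type = "S"
  }
}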
Problem
When I run terraform init I get:
Error refreshing state: InvalidParameter: 2 validation error(s) found.
- minimum field size of 1, GetObjectInput.Bucket.
- minimum field size of 1, GetObjectInput.Key.
What I've tried
terraform fmt reports no errors and happily reformats my terraform.tf file. I tried moving the stanza into my main.tf too, just in case the terraform.tf file was being ignored for some reason. I got exactly the same results.
I've also tried running this outside the Alpine container, from an Ubuntu EC2 instance in AWS, but I get the same results.
I originally had the name of the Terraform file in the key. I've removed that (thanks), but it hasn't helped resolve the problem.
Also, I've just tried running this in an older image: hashicorp/terraform:0.12.17 but I get a similar error:
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
I'm guessing that I've done something trivially stupid here, but I can't see what it is.
Solved!!!
I don't understand the problem, but I have a working solution now: I deleted the .terraform directory and reran terraform init. This is OK for me because I don't have any existing state. The insight came from reading the error from the 0.12.17 version of Terraform, which complained about not being able to read the workspace.
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
That initially led me to believe there was a problem with an earlier version of Terraform reading a newer version's configuration. So I blew away the .terraform directory and it worked with the older Terraform; then I did it again and it worked with the newer Terraform too. Obviously, something had gotten itself screwed up in Terraform's local working directory. I don't know how or why, but it works for me, so...
If you are facing this issue on the app side, there is a chance that you are sending the wrong payload or that the expected payload has been changed by the backend.
Before, I was doing this:
--> POST .../register
{"email":"FamilyKalwar#gmail.com","user":{"password":"123456#aA","username":"familykalwar"}}
--> END POST (92-byte body)
<-- 500 Internal Server Error .../register (282ms)
{"message":"InvalidParameter: 2 validation error(s) found.\n- minimum field size of 6, SignUpInput.Password.\n- minimum field size of 1, SignUpInput.Username.\n"}
Later I found that the expected payload had been updated to this:
{
  "email": "tanishat1#gmail.com",
  "username": "tanishat1",
  "password": "123456#aA"
}
I removed the "user" data class and updated the payload and it worked!
I am getting this error when running terraform apply or terraform plan:
Error: Failed to instantiate provider "aws" to obtain schema: timeout while waiting for plugin to start
Terraform init works.
Last time I was able to fix it by restarting my computer, but this time that didn't work.
I tried a very simple configuration to test, but that didn't work either.
I had the same problem and solved it by adding a new rule to the security group that allows connections to my EKS cluster on AWS. In my case, my IP had changed and I had forgotten to add it, which is why I got the error.
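Here is a minimal sketch of the kind of rule that fixed it for me, assuming the cluster security group ID and my current public IP; both values below are placeholders:
resource "aws_security_group_rule" "allow_my_ip" {
  type              = "ingress"
  description       = "Workstation access to the EKS control plane"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_blocks       = ["203.0.113.10/32"]    # placeholder: your current public IP
  security_group_id = "sg-0123456789abcdef0" # placeholder: the cluster security group ID
}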