While running terraform plan, we are getting the error shown in the panic output below. The same configuration has worked fine across multiple terraform plan, apply and destroy runs.
We are unable to make a meaningful summary of this error. We have searched Stack Overflow and the GitHub issue trackers, but nothing matched what we are facing, not even closely.
Terraform version: 1.1.8
Azure Provider Plugin Version: 3.3.0
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.3.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "abcd"
    storage_account_name = "tfstate30303"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    access_key           = "abcdsample....."
  }
}

provider "azurerm" {
  features {}
  skip_provider_registration = true
}
Error:
Failed to load plugin schemas
Error while loading schemas for plugin components: Failed to obtain provider schema: Could not load the schema for provider registry.terraform.io/hashicorp/azurerm: Plugin did not respond. The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).getProviderSchema call. The plugin logs might contain more details.
TF_LOG=TRACE error log:
[ERROR] plugin (*GRPCProvider).getProviderSchema: error="rpc error: code=Unavailable desc = connection error: desc = "transport:error while dialing: dial tcp 127.0.0.1:10000: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. [WARN]: plugin failed to exit gracefully
Note: the Azure region is Central India. This was working fine until yesterday. We are working in an air-gapped environment, so the plugins are downloaded manually and placed in the plugin directory of the code folder.
Please let me know if there is any mistake on my end. I am unable to make sense of this error and have never faced it with Terraform before.
Thank you CK5, you got it working by simply rebooting the system. Posting this as the solution, with an RCA of why you were getting the error.
RCA: You may have previously installed a different plugin for the same Terraform configuration and later deleted it, and you are now using a specific version of the provider plugin. That leftover state can break communication with Azure even though the plugin is installed.
So if you hit this kind of error, reboot your system and run the VS Code editor as administrator so the plugin syncs properly and can communicate with Azure.
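If a reboot alone does not clear it in an air-gapped setup like the one described, it can also help to delete the .terraform directory and point Terraform explicitly at the manually downloaded plugins through a filesystem mirror in the CLI configuration (~/.terraformrc, or terraform.rc on Windows) before re-running terraform init. A minimal sketch, assuming the plugins were copied to C:/terraform/plugins:

provider_installation {
  # Look for provider binaries in the local mirror directory, laid out as
  # <hostname>/<namespace>/<type>/<version>/<os_arch>/ under this path.
  filesystem_mirror {
    path    = "C:/terraform/plugins"
    include = ["registry.terraform.io/hashicorp/azurerm"]
  }

  # Never try to reach the registry for the mirrored provider.
  direct {
    exclude = ["registry.terraform.io/hashicorp/azurerm"]
  }
}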
Related
When a configuration like this is used:
module "key_vault" {
for_each = var.STAGES
resourceName = "kv-${each.value}-${local.resource_name_suffix}"
...
purgeProtectionEnabled = true
}
Then, when secrets have to be deleted for some reason, restoring them fails with this error:
Error: keyvault.BaseClient#RecoverDeletedSecret: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="Conflict" Message="Secret AzureAiApiKey is currently being deleted." InnerError={"code":"ObjectIsBeingDeleted"}
So how can RecoverDeletedSecret be delayed, or triggered only after the deletion has completed?
According to this GitHub issue, this should already be fixed in version 2.49.
provider "azurerm" {
version = "~> 2.49.0"
}
Please always try to use the latest azurerm provider. I’ve seen many errors I couldn’t wrap my head around. And more often than not, things are already fixed in later versions of the provider.
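For completeness, on the 3.x releases of the provider the handling of soft-deleted secrets can also be steered from the features block. A minimal sketch, assuming a recent azurerm 3.x version where these key_vault flags are available:

provider "azurerm" {
  features {
    key_vault {
      # Recover a soft-deleted secret when a secret with the same name is
      # created again, instead of failing with a conflict.
      recover_soft_deleted_secrets = true

      # Purge secrets on destroy so a later re-create does not collide
      # with a copy that is still in the soft-deleted state.
      purge_soft_deleted_secrets_on_destroy = true
    }
  }
}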
I am new to Golang and Terraform plugin development. I am trying to generate plugin documentation with tfplugindocs, but I keep getting errors.
Below is the output of the tfplugindocs execution.
My plugin is still under development and not registered with Terraform.
What can be the cause of this error?
% go run github.com/hashicorp/terraform-plugin-docs/cmd/tfplugindocs
rendering website for provider "hashicups"
exporting schema from Terraform
compiling provider "hashicups"
getting Terraform binary
running terraform init
getting provider schema
Error executing command: unable to generate website:
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be
located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
failed to instantiate provider "registry.terraform.io/hashicorp/hashicups" to
obtain schema: Unrecognized remote plugin message: open : no such file or
directory
This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
exit status 1
I ran into this error as well. It appears to have started when I commented out the Address field in the providerserver.ServeOpts struct in main.go, like this:
opts := providerserver.ServeOpts{
    // TODO: Update this string with the published name of your provider.
    // Address: "registry.terraform.io/davidalpert/myproject",
    Debug: debug,
}
Once I uncommented that Address field I was able to run the build (which runs generate, which runs tfplugindocs) without error.
opts := providerserver.ServeOpts{
    // TODO: Update this string with the published name of your provider.
    Address: "registry.terraform.io/davidalpert/myproject",
    Debug:   debug,
}
My plugin is not published yet either, so that Address value is not valid, but that doesn't seem to matter so long as the Address field is set to override the default.
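On a related note, while the provider is unpublished you can also point Terraform at the locally built binary for that same address with a dev_overrides block in the CLI configuration, so plan and apply do not try to download it from the registry. A minimal sketch, assuming the provider binary was built into a local Go bin directory:

provider_installation {
  # Use the locally built binary for this provider address instead of
  # downloading it (the path is an assumption).
  dev_overrides {
    "registry.terraform.io/davidalpert/myproject" = "/home/david/go/bin"
  }

  # All other providers are installed as usual.
  direct {}
}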
Environment
Terraform v0.12.24
+ provider.aws v2.61.0
Running in an alpine container.
Background
I have a basic terraform script running ok, but now I'm extending it and am trying to configure a remote (S3) state.
terraform.tf:
terraform {
  backend "s3" {
    bucket         = "labs"
    key            = "com/company/labs"
    region         = "eu-west-2"
    dynamodb_table = "labs-tf-locks"
    encrypt        = true
  }
}
The bucket exists, and so does the table. I have created them both with terraform and have confirmed through the console.
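For context, creating the bucket and the lock table with Terraform typically looks something like this (a sketch; the names match the backend block above, the remaining arguments are assumptions):

resource "aws_s3_bucket" "state" {
  bucket = "labs"

  # Keep old versions of the state object around in case a write has to be rolled back.
  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "locks" {
  name         = "labs-tf-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  # The S3 backend expects a string attribute named LockID as the table's hash key.
  attribute {
    name = "LockID"
    type = "S"
  }
}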
Problem
When I run terraform init I get:
Error refreshing state: InvalidParameter: 2 validation error(s) found.
- minimum field size of 1, GetObjectInput.Bucket.
- minimum field size of 1, GetObjectInput.Key.
What I've tried
terraform fmt reports no errors and happily reformats my terraform.tf file. I tried moving the stanza into my main.tf too, just in case the terraform.tf file was being ignored for some reason. I got exactly the same results.
I've also tried running this without the alpine container, from an ubuntu ec2 instance in aws, but I get the same results.
I originally had the name of the terraform file in the key. I've removed that (thanks) but it hasn't helped resolve the problem.
Also, I've just tried running this in an older image: hashicorp/terraform:0.12.17 but I get a similar error:
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
I'm guessing that I've done something trivially stupid here, but I can't see what it is.
Solved!!!
I don't understand the problem, but I have a working solution now. I deleted the .terraform directory and reran terraform init. This is ok for me because I don't have an existing state. The insight came from reading the error from the 0.12.17 version of terraform, which complained about not being able to read the workspace.
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
That initially led me to believe there was a problem with an earlier version of Terraform reading a newer version's configuration. So I blew away .terraform and it worked with the older Terraform, then I did it again and it worked with the newer one too. Obviously something had gotten itself into a bad state in Terraform's local storage; I don't know how or why. But it works for me, so...
If you are facing this issue on the app side, there is a chance that you are sending the wrong payload, or that the expected payload has been updated by the backend.
Before, I was sending this:
--> POST .../register
{"email":"FamilyKalwar#gmail.com","user":{"password":"123456#aA","username":"familykalwar"}}
--> END POST (92-byte body)
<-- 500 Internal Server Error .../register (282ms)
{"message":"InvalidParameter: 2 validation error(s) found.\n- minimum field size of 6, SignUpInput.Password.\n- minimum field size of 1, SignUpInput.Username.\n"}
Later I found that the expected payload had been updated to this:
{
  "email": "tanishat1#gmail.com",
  "username": "tanishat1",
  "password": "123456#aA"
}
I removed the "user" data class and updated the payload and it worked!
I am getting this error when running terraform plan or apply:
Error: Failed to instantiate provider "aws" to obtain schema: timeout while waiting for plugin to start
Terraform init works.
Last time I was able to fix it by restarting my computer, but this time that didn't work.
I tried a very simple configuration to test, but it didn't work either.
I had the same problem and solved it by adding a new rule to the security group that allows connecting to my EKS cluster on AWS. In my case my IP had changed and I had forgotten to add it, which is why I got the error.
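For illustration, that kind of rule can be expressed in Terraform roughly like this (a sketch; the security group ID and the CIDR are assumptions):

# Allow the current workstation IP to reach the EKS cluster API endpoint.
resource "aws_security_group_rule" "workstation_to_eks" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.10/32"]     # replace with your current public IP
  security_group_id = "sg-0123456789abcdef0"  # the cluster's security group (assumed)
  description       = "Workstation access to the EKS API"
}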
I am trying to declare the following Terraform provider:
provider "mysql" {
endpoint = "${aws_db_instance.main.endpoint}:3306"
username = "root"
password = "root"
}
I get the following error:
Error refreshing state: 1 error(s) occurred:
* dial tcp: lookup ${aws_db_instance.main.endpoint}: invalid domain name
It seems that Terraform is not performing interpolation on my endpoint string, yet I don't see anything in the documentation about this -- what gives?
Yes, Terraform does interpolate there. There's an example in the docs at https://www.terraform.io/docs/providers/mysql/
# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}
I ran into a similar set of error messages ("connect failed," "invalid domain lookup") and looked into this a bit. I hope this helps you or someone else working across cloud and database providers in Terraform.
This seems to come down to the MySQL provider attempting to establish a database connection as soon as it's initialized, which could be a problem if you're trying to build a database server and configure the database / grants on it as part of the same Terraform run. Providers get initialized based on Terraform finding a resource owned by that provider in your Terraform code, and since this connection attempt happens when the provider gets initialized, you can't work around this with -target=<SPECIFIC RESOURCE>.
The workarounds I can think of would be to have a codebase for setting up the database server and a different codebase for setting up the database grants and suchlike ... or to have Terraform kick off a script that does that work for you (with dynamic parameters, of course!). Either way, you're effectively removing mysql_* resources from your initial Terraform run and that's what fixes this.
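As a sketch of that first workaround (the resource, variable, and database names are assumptions, and the MySQL user is assumed to already exist), the second codebase that only manages the database-level objects could look roughly like this, applied after the database server from the other codebase exists:

# grants/main.tf -- a separate Terraform codebase, applied after the
# aws_db_instance from the other codebase has been created.
variable "db_endpoint" {}
variable "db_password" {}

provider "mysql" {
  endpoint = "${var.db_endpoint}:3306"
  username = "root"
  password = "${var.db_password}"
}

resource "mysql_database" "app" {
  name = "app"
}

resource "mysql_grant" "app" {
  user       = "app"   # assumed to exist already on the server
  host       = "%"
  database   = "${mysql_database.app.name}"
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}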
There are a couple of code changes that probably need to happen here - the Terraform MySQL provider would need to delay connecting to the database until Terraform tells it to run an operation on a resource, and it may be necessary to look at how Terraform handles dependencies across providers. I tried hacking in deferred connection logic just for the mysql_database resource to see if that solved all my problems and Terraform still complained about a dependency loop in the graph.
You can track the MySQL provider issue here:
https://github.com/terraform-providers/terraform-provider-mysql/issues/2
And the comments from before providers were split into their own releasable codebases:
https://github.com/hashicorp/terraform/issues/5687