Terraform: Provider configuration not present

How do I resolve the error below?
Error: Provider configuration not present
To work with module.my_ec2.aws_instance.web[0] (orphan) its original provider
configuration at module.my_ec2.provider["registry.terraform.io/hashicorp/aws"]
is required, but it has been removed. This occurs when a provider
configuration is removed while objects created by that provider still exist in
the state. Re-add the provider configuration to destroy
module.my_ec2.aws_instance.web[0] (orphan), after which you can remove the
provider configuration again.
Releasing state lock. This may take a few moments...

Try adding something like this:
provider "aws" {
version = "3.10.0"
region = "eu-west-1"
profile = "default"
}
Then run terraform init
and try running a plan again.
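On Terraform 0.13 and later the provider version constraint usually lives in a required_providers block rather than inside the provider block itself; a minimal sketch of the same configuration in that style (version and region taken from the snippet above):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.10"
    }
  }
}

provider "aws" {
  region  = "eu-west-1"
  profile = "default"
}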

Related

How to fix Provider configuration not present error in Terraform plan

I have moved a Terraform configuration from one Git repo to another.
Then I ran terraform init and it completed successfully.
When I run terraform plan, I see the issue below.
terraform plan
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate its original provider
│ configuration at provider["registry.terraform.io/hashicorp/aws"].cloudfront-acm-us-east-1 is required, but it
│ has been removed. This occurs when a provider configuration is removed while objects created by that provider
│ still exist in the state. Re-add the provider configuration to destroy
│ data.aws_acm_certificate.cloudfront_wildcard_product_env_certificate, after which you can remove the provider
│ configuration again.
The data source looks like this:
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
  provider = aws.cloudfront-acm-us-east-1
  domain   = "*.${var.product}.${var.environment}.xyz.com"
  statuses = ["ISSUED"]
}
After further research I found that removing the line below makes it work as expected:
provider = aws.cloudfront-acm-us-east-1
I'm not sure what the reason is.
It appears that you were using a multi-provider configuration in the former repo, i.e. you probably had one provider block like
provider "aws" {
region = "some-region"
access_key = "..."
secret_key = "..."
}
and a second like
provider "aws" {
alias = "cloudfront-acm-us-east-1"
region = "us-east-1"
access_key = "..."
secret_key = "..."
}
Such a setup can be used if you need to create or access resources in multiple regions or multiple accounts.
If no provider is specified in a resource or data source block, Terraform uses the default (un-aliased) provider configuration to create the resource (or to perform the lookup in the case of a data source).
With the provider argument in
data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
provider = aws.cloudfront-acm-us-east-1
domain = "*.${var.product}.${var.environment}.xyz.com"
statuses = ["ISSUED"]
}
you tell Terraform to use a specific provider.
I assume you did not move the second provider config to the new repo, but you are still telling Terraform to use a specific provider that is no longer there. Once you remove the provider argument, Terraform falls back to the default aws provider.
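Concretely, with the argument removed the data source simply uses the default provider:

data "aws_acm_certificate" "cloudfront_wildcard_product_env_certificate" {
  domain   = "*.${var.product}.${var.environment}.xyz.com"
  statuses = ["ISSUED"]
}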
A further possible reason for this error message
Just for completeness:
The same error message can also appear in a slightly different setting, where you have a multi-provider config with resources created via the second provider. If you remove the resource config of these resources from the Terraform config and at the same time remove the specific provider config, then Terraform will not be able to destroy the resources via that provider and will show the error message from your post.
Taken literally, the error message describes this second scenario, but it does not exactly match your problem description.

Hashicorp Vault Required Provider Configuration in Terraform

My GitLab CI pipeline Terraform configuration requires a couple of required_providers entries to be declared. These are "hashicorp/azuread" and "hashicorp/vault", so in my provider.tf file I have the declaration below:
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.0.0"
    }
  }
}
When my GitLab pipeline runs the terraform plan stage however, it throws the following error:
Error: Invalid provider configuration
Provider "registry.terraform.io/hashicorp/vault" requires explicit configuraton.
Add a provider block to the root module and configure the providers required
arguments as described in the provider documentation.
I realise my required provider block for hashicorp/vault is incomplete/not properly configured but despite all my efforts to find an example of how it should be configured, I have simply run into a brick wall.
Any help with a very basic example would be greatly appreciated.
It depends on the version of Terraform you are using. However, each provider's page on the Terraform Registry has a Use Provider button (in the top right corner) which explains how to add the required blocks of code to your files.
Each provider has some additional configuration parameters which could be added and some are required.
So based on the error, I think you are missing the second part of the configuration:
provider "vault" {
# Configuration options
}
There is also an explanation of how to upgrade to version 3.0 of the provider. You might also want to take a look at the HashiCorp Learn examples and the GitHub repo with example code.
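As a rough sketch of what those configuration options might look like (the address and token below are placeholders, not values from the question), the Vault provider typically needs at least the server address and some form of authentication, both of which can alternatively be supplied via the VAULT_ADDR and VAULT_TOKEN environment variables:

provider "vault" {
  # Address of the Vault server; can also be set via the VAULT_ADDR environment variable
  address = "https://vault.example.com:8200"

  # Authentication token; can also be set via the VAULT_TOKEN environment variable
  # token = var.vault_token
}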

Azure :: Terraform fails on azure keyvault secrets

I am noticing this weird error since I moved all of my code from provider version 1.42.0 to 2.19.0. I am creating several Key Vault secrets. Earlier, when I tried to execute a plan after applying once, Terraform used to refresh the state and proceed, but now, after updating the provider version, I am seeing the error below.
Error: A resource with the ID "https://mytestingvault.vault.azure.net/secrets/hub-access/060e71ecd1084cb5a6a496f77a2aea5c" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_key_vault_secret" for more information.
Additionally, I have added a lifecycle ignore_changes block to see if it could skip reading the vault secret changes, but unfortunately the same error is shown. The only way to get rid of the error is to delete the secret. What am I doing wrong here?
lifecycle {
  ignore_changes = [
    value, name
  ]
}
The Terraform key vault documentation says:
Terraform will automatically recover a soft-deleted Key Vault during
Creation if one is found - you can opt out of this using the features
block within the Provider block.
You should configure your Terraform to stop recovering the softly deleted Key Vault like this:
provider "azurerm" {
features {
key_vault {
recover_soft_deleted_key_vaults = false
}
}
}
If you want Terraform to purge any softly deleted Key Vaults when using terraform destroy use this additional line:
provider "azurerm" {
features {
key_vault {
purge_soft_delete_on_destroy = true
recover_soft_deleted_key_vaults = false
}
}
}
You probably need to read up on the general topic of Terraform state management. This is not specific to your Key Vault secret: a resource (your secret) exists that was not created by the Terraform workspace you are currently executing, and Terraform does not like that. So you either need to import this pre-existing resource into the state of this workspace, or delete it so that it can be re-created (and thereby managed).
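For the import route, a sketch of the command using the resource ID from the error message; the resource address azurerm_key_vault_secret.hub_access is a hypothetical name here, so adjust it to whatever your configuration actually calls the secret:

terraform import azurerm_key_vault_secret.hub_access "https://mytestingvault.vault.azure.net/secrets/hub-access/060e71ecd1084cb5a6a496f77a2aea5c"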
The issue was that the Key Vault, even though it was deleted, still had to be purged via the CLI (az keyvault purge).
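A sketch of that purge using the Azure CLI, assuming the vault name from the error message above:

az keyvault purge --name mytestingvault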

Terraform state replace-provider updates state with wrong data

We upgraded our Terraform version and we have a problem with the Terraform remote state. Basically, I ran this command to update the azurerm provider:
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'
Now when I run the plan command it shows me some errors. They are all the same, only the resource differs. For example:
To work with module.name.module.lb_name.azurerm_lb_probe.instance
its original provider configuration at
provider["registry.terraform.io/-/azurerm"] is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.name.module.lb_name.azurerm_lb_probe.instance, after which
you can remove the provider configuration again.
Basically the state was updated and the provider looks like this:
"provider": "provider.azurerm"
but it should look like this:
"provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]"
Is there any way to update it via Terraform commands, or is the only way to fix it to edit the state file manually?
When you replace the providers for the Terraform upgrade with the command:
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'
there is no problem. You can then check the current providers with the command below:
terraform providers
The output should show that the providers now match the required hashicorp/azurerm source.
At this point you need to run init again to pull the current providers and replace the existing ones:
terraform init
This is the step you have missed.
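Putting the steps together, the full sequence looks roughly like this:

# Point the old legacy provider address at the new registry address in the state
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'

# Re-initialize so the hashicorp/azurerm provider is actually installed
terraform init

# Verify that the state and the configuration agree on the provider
terraform providers

# The plan should no longer report the missing provider configuration
terraform plan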

How to run "terraform state mv" commands in the Terraform Enterprise/Cloud?

I'm in the process of a Terraform code refactoring in which some resources are moved into a module A, and a module B becomes a submodule of A. I'm now getting this error in Terraform Enterprise:
Error: Provider configuration not present
To work with
module.account-baseline.module.iam-policy.aws_iam_role.ops_role
its original provider configuration at module.account-baseline.provider.aws is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.account-baseline.module.iam-policy.aws_iam_role.ops_role,
after which you can remove the provider configuration again.
I've tried this in my playground account using a local Terraform state, running "terraform state mv" commands to move the module into a sub-module, and it works, but I don't know how to apply this state change in Terraform Enterprise.
Any help would be more than welcome, thanks in advance!
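For reference, a sketch of the kind of move the refactoring implies; the old address here (the role sitting directly under module.account-baseline) is an assumption for illustration, not something taken from the question:

terraform state mv \
  'module.account-baseline.aws_iam_role.ops_role' \
  'module.account-baseline.module.iam-policy.aws_iam_role.ops_role'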
