I am attempting to create new resources on GCP with a remote backend.
After running terraform init and terraform plan -out=tfplan, and then terraform apply tfplan, I get the following error:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
│
│ (and 22 more similar warnings elsewhere)
╵
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
╷
│ Error: Inconsistent dependency lock file
│
│ The given plan file was created with a different set of external dependency
│ selections than the current configuration. A saved plan can be applied only
│ to the same configuration it was created from.
│
│ Create a new plan from the updated configuration.
On the other hand, when I run terraform init, terraform plan, and terraform apply -auto-approve, it works with no issues.
Part of the work of terraform init is to install all of the required providers into a local cache directory, .terraform/providers, so that other Terraform commands can run them. For entirely new providers, it will also update the dependency lock file to record which versions it selected, so that future runs of terraform init are guaranteed to make the same decisions.
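For illustration, a recorded selection in .terraform.lock.hcl looks roughly like this (the version number and hash below are invented placeholders):
provider "registry.terraform.io/hashicorp/google" {
  version     = "4.51.0"
  constraints = ">= 4.0.0"
  hashes = [
    "h1:PLACEHOLDER...",
  ]
}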
If you are running terraform apply tfplan in a different directory than where you ran terraform plan -out=tfplan then that local cache of the needed provider plugins won't be available and thus the apply will fail.
Separately from that, it also seems that when you ran terraform init prior to creating the plan, Terraform had to install some new providers that were not previously recorded in the dependency lock file, and so it updated the dependency lock file. However, when you ran terraform apply tfplan later those changes to the lock file were not visible and so Terraform reported that the current locks are inconsistent with what the plan was created from.
The Running Terraform in Automation guide has a section Plan and Apply on Different Machines which discusses some of the special concerns that come into play when you're trying to apply somewhere other than where you created the plan. However, I'll try to summarize the parts which seem most relevant to your situation, based on this error message.
Firstly, an up-to-date dependency lock file should be recorded in your version control system so that your automation is only reinstalling previously-selected providers and never making entirely new provider selections. That will then ensure that all of your runs use the same provider versions, and upgrades will always happen under your control.
You can make your automation detect this situation by adding the -lockfile=readonly option to terraform init, which makes that command fail if it would need to change the dependency lock file in order to perform its work:
terraform init -lockfile=readonly
If you see that fail in your automation, then the appropriate fix would be to run terraform init without -lockfile=readonly inside your development environment, and then check the updated lock file into your version control system.
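Concretely, that development-environment workflow might look like this (the git commands are shown only as an illustration of checking the file in):
terraform init                  # allowed to update the dependency lock file here
git add .terraform.lock.hcl
git commit -m "Record new provider selections"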
If you cannot initialize the remote backend in your development environment, you can skip that step but still install the needed providers by adding -backend=false, like this:
terraform init -backend=false
Getting the same providers reinstalled again prior to the apply step is the other part of the problem here.
The guide I linked above suggests achieving this by archiving the entire working directory as an artifact after planning and then re-extracting it at the same path in the apply step. That is the most thorough solution, and in particular it is what Terraform Cloud does in order to ensure that any other files created on disk during planning (such as by using the archive_file data source from the hashicorp/archive provider) will survive into the apply phase.
However, if you know that your configuration itself doesn't modify the filesystem during planning (which is a best practice, where possible), then it can also be valid to just re-run terraform init -lockfile=readonly before running terraform apply tfplan. That will reinstall the previously-selected providers and repeat all of the other working directory initialization that terraform init usually does.
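As a rough sketch, assuming separate plan and apply stages in an automation pipeline (the stage layout and artifact handling are hypothetical and vary by CI system):
# Plan stage
terraform init -lockfile=readonly
terraform plan -out=tfplan
# ... hand tfplan to the apply stage as an artifact, alongside the same configuration ...

# Apply stage, in a fresh checkout of the same commit
terraform init -lockfile=readonly
terraform apply tfplan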
As a final note, tangential to the rest of this, it seems like Terraform was also printing a warning about a deprecated language feature and on your system the warning output became interleaved with the error output, making the first message confusing because it includes a paragraph from the warning inside of it.
I believe the intended error message text, without the errant extra content from the warning, is as follows:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
There could be various reasons, but the main one is that you updated your configuration and started using new providers. In my case, I updated my template and added a random resource. The solution to that problem is to bring the lock file up to date with the command below:
terraform init -upgrade
It will install the new providers and make the lock file consistent automatically.
When you run terraform init, it generates a file called .terraform.lock.hcl. Make sure you have this file in the directory where you run terraform apply. If you are using CI/CD to run Terraform, make sure the file is also available when running terraform apply.
For example, in a GitLab CI/CD pipeline, you can add these paths to the cache:
cache:
  paths:
    - .terraform
    - .terraform.lock.hcl
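If your plan and apply run as separate jobs, the saved plan file itself also has to travel between them; a sketch under assumed job names (everything here is illustrative, not a drop-in pipeline):
plan:
  script:
    - terraform init -lockfile=readonly
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan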
My problem
I am having trouble defining the provider for my module.
Terraform fails to find the provider's plugin when I run terraform init, and it shows the wrong provider for my module when I run terraform providers.
Setup
I am using Terraform version 1.3.7 on Debian 11.
Here's an example of what I am trying to do.
I have a main.tf that contains my main configuration and module calls. In this example I use a single module for creating a Docker container.
.
├── main.tf
└── modules/
└── container_module/
└── main.tf
In the root module project/main.tf file, I define the provider and call the module:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.1"
    }
  }
}

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

module "container" {
  source = "./modules/container_module"
}
In modules/container_module/main.tf, I create the Docker image and container resources:
resource "docker_image" "debian" {
name = "debian:latest"
}
resource "docker_container" "foo" {
image = docker_image.debian.image_id
name = "foo"
}
What I expect to happen
When I run terraform init, it should download the provider's plugin from kreuzwerker/docker.
What actually happens
Instead, Terraform downloads the plugin from kreuzwerker/docker once, then attempts to download it again from hashicorp/docker.
Here's the command's output:
terraform init
Initializing modules...
- container in modules/container_module
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/docker...
- Finding kreuzwerker/docker versions matching "3.0.1"...
- Installing kreuzwerker/docker v3.0.1...
- Installed kreuzwerker/docker v3.0.1 (self-signed, key ID BD080C4571C6104C)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/docker: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/docker
│
│ Did you intend to use kreuzwerker/docker? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/docker, run the following command:
│ terraform providers
╵
When I run terraform providers I get two different sources depending on the file:
terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/kreuzwerker/docker] 3.0.1
└── module.container
└── provider[registry.terraform.io/hashicorp/docker]
According to the documentation, the child modules should inherit the provider from their parent:
Default Behavior: Inherit Default Providers:
If the child module does not declare any configuration aliases, the providers argument is optional. If you omit it, a child module inherits all of the default provider configurations from its parent module. (Default provider configurations are ones that don't use the alias argument.)
What I have already checked
Do terraform modules need required_providers?
This answer confirms the provider inheritance.
Terraform provider's resources not available in other tf files:
This question didn't help.
Terraform, providers miss inherits on module
This answer to this similar question says that I should add required_providers in the child module, but it is for an older version and it contradicts what I saw elsewhere.
I have the same issue when I create a providers.tf file in the root directory.
My question
How should I declare my provider so that the child module can inherit the provider from the root module?
kreuzwerker/docker is not a HashiCorp provider, so, as explained here, you have to explicitly define required_providers in each module; such providers are not inherited.
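In other words, a minimal sketch of what the child module needs in addition to the root module's configuration:
# modules/container_module/main.tf
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
With this declaration in place, the unqualified docker_* resources in the child module resolve to kreuzwerker/docker instead of the implied hashicorp/docker.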
I am trying to add the attribute response_headers_policy_id to the aws_cloudfront_distribution resource in a shared module. I have 3 environments: prod, stage, and demo. Prod was created first, followed by stage and demo a few months later. When adding that attribute to the staging and demo environments, there are no issues. However, the plan fails with the following error when running for the prod environment:
Error: Unsupported argument
│
│ on ../../modules/<module>/cloudfront.tf line 47, in resource "aws_cloudfront_distribution" "this":
│ 47: response_headers_policy_id = "67f7725c-6f97-4210-82d7-5512b31e9d03" // SecurityHeadersPolicy ID
│
│ An argument named "response_headers_policy_id" is not expected here.
My assumption is that the state file expects an older version of the module for the production environment, but I am unsure how to resolve that issue. Especially in terraform cloud.
My first thought is that there is a mismatch in what version of the AWS provider you're using for your different environments. That argument was only added to the AWS provider in v3.64.0, in #21620.
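If so, one fix is to pin a sufficiently new provider version in the configuration used by all environments; a sketch, with the constraint chosen only as an example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0"  # response_headers_policy_id was added in v3.64.0
    }
  }
}
After updating the constraint, re-running terraform init -upgrade in the prod environment should select a provider version that supports the argument.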
I created AzDo repositories using Terraform, but what is the best way to rename a repository without deleting it? I found the command terraform state mv for this purpose. Please let me know if there is a better way.
Currently, when I use the following terraform state mv command:
terraform state mv "module.azdo-module.azuredevops_git_repository[\"repo1\"]" "module.azdo-module.azuredevops_project.azuredevops_git_repository[\"repo2\"]"
I get the error below:
Error: Invalid address
│
│ on line 1:
│ (source code not available)
│
│ A resource name is required.
Maybe:
terraform state mv "module.azdo-module.azuredevops_git_repository.repo1" "module.azdo-module.azuredevops_project.azuredevops_git_repository.repo2"
List the resources to check the exact names:
terraform state list
Terraform introduced a new declarative way to refactor resources, the moved block syntax, in version 1.1.
Steps:
rename the resource to the new name
moved {
  from = module.azdo-module.azuredevops_git_repository.repo1
  to   = module.azdo-module.azuredevops_project.azuredevops_git_repository.repo2
}
update references to a new resource name
plan and apply
remove the moved block (given no one else is using your modules)
More info:
https://developer.hashicorp.com/terraform/language/modules/develop/refactoring#moved-block-syntax
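Putting steps 1 and 2 together inside the module, a minimal sketch (the resource type and names mirror the question; the arguments are assumptions):
# Resource renamed from "repo1" to "repo2"
resource "azuredevops_git_repository" "repo2" {
  project_id = azuredevops_project.project.id  # hypothetical reference
  name       = "repo2"
  # ... remaining arguments unchanged ...
}

moved {
  from = azuredevops_git_repository.repo1
  to   = azuredevops_git_repository.repo2
}
Note that inside a module the moved block uses addresses relative to that module, so the module.azdo-module prefix is not repeated there.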
I'm using Terraform v1.0.0 and created a remote backend using AWS S3 and AWS DynamoDB, as explained in Terraform: Up & Running by Yevgeniy Brikman:
I wrote the code for the S3 bucket and the DynamoDB table and created the resources via terraform apply
I added terraform { backend "s3" {} } to my code
I created a backend.hcl file with all the relevant parameters (a sketch is shown after this list)
I moved my local state to S3 by invoking terraform init -backend-config=backend.hcl
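For reference, such a backend.hcl typically holds the partial s3 backend configuration; all values below are hypothetical placeholders:
# backend.hcl -- passed via "terraform init -backend-config=backend.hcl"
bucket         = "my-terraform-state"           # placeholder bucket name
key            = "global/s3/terraform.tfstate"  # placeholder state path
region         = "us-east-2"                    # placeholder region
dynamodb_table = "my-terraform-locks"           # placeholder lock table
encrypt        = true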
Now I want to convert the remote state back to local state so I can safely delete the remote backend. Brikman explains that to do this, one has to remove the backend configuration and invoke terraform init. When I try this, I see:
$ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
I figured the correct approach is to use -reconfigure, which seems to work at first glance:
$ terraform init -reconfigure
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.47.0
- Using previously-installed hashicorp/random v3.1.0
Terraform has been successfully initialized!
However, executing terraform plan reveals that the initialization did not succeed:
$ terraform plan
╷
│ Error: Backend initialization required, please run "terraform init"
│
│ Reason: Unsetting the previously set backend "s3"
│
│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.
│
│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.
│
│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.
╵
The only way to unset the backend seems to be via terraform init -migrate-state:
$ terraform init -migrate-state
Initializing modules...
Initializing the backend...
Terraform has detected you're unconfiguring your previously set "s3" backend.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "local" backend. No existing state was found in the newly
configured "local" backend. Do you want to copy this state to the new "local"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully unset the backend "s3". Terraform will now operate locally.
Is it not possible to convert the state via terraform init -reconfigure despite Terraform explicitly telling me so? If so, what does terraform init -reconfigure do exactly?
The workaround below solved this problem for me.
Add the following and run terraform init:
terraform {
  backend "local" {}
}
From the official docs, it seems -reconfigure is a bit destructive in the sense that it disregards the existing configuration. I would think that if you made changes to the backend and then ran the command, it would only work from the assumption that this is a new configuration. I only recently read the docs myself, and I did not know that this was the behavior.
So, back to your question, I would assume -migrate-state is the desired option to use when migrating state between different backends. I understand from your issue that this was the case using terraform init -migrate-state?
As MindTooth said, the command init -migrate-state does exactly what you want to do.
It migrates the state unchanged when a different backend is configured.
init -reconfigure will initialise the new backend with a clean empty state.
Another way to do it is by pulling the state from the s3 backend to a JSON file, then initialising an empty local backend using init -reconfigure, and pushing the state back in:
terraform state pull > state.json
terraform init -reconfigure
terraform state push state.json
I'm trying to upgrade from Terraform 0.12 to 0.13.
There seems to be no specific syntax problem: when I run terraform 0.13upgrade, nothing is changed;
only a version.tf file is added:
+terraform {
+  required_providers {
+    aws = {
+      source = "hashicorp/aws"
+    }
+  }
+  required_version = ">= 0.13"
+}
and when I run terraform plan I get:
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
2 problems:
- Failed to instantiate provider "registry.terraform.io/-/aws" to obtain
schema: unknown provider "registry.terraform.io/-/aws"
- Failed to instantiate provider "registry.terraform.io/-/template" to obtain
schema: unknown provider "registry.terraform.io/-/template"
Running terraform providers shows:
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws]
├── module.bastion
│ ├── provider[registry.terraform.io/hashicorp/template]
│ └── provider[registry.terraform.io/hashicorp/aws]
└── module.vpc
└── provider[registry.terraform.io/hashicorp/aws] >= 2.68.*
Providers required by state:
provider[registry.terraform.io/-/aws]
provider[registry.terraform.io/-/template]
So my guess is that for some reason I have -/aws instead of hashicorp/aws in my tfstate; however, I can't find this specific string at all in the tfstate.
I tried:
running terraform init
terraform init -reconfigure
deleting the .terraform folder
deleting the ~/.terraform.d folder
So I'm running out of ideas on how to solve this problem.
I followed the steps here:
terraform state replace-provider registry.terraform.io/-/template registry.terraform.io/hashicorp/template
terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws
and it fixed my problem.