terraform init not working when specifying modules

I am new to Terraform and trying to fix a small issue I am facing when testing modules.
Below is the folder structure I have on my local computer.
I have the following code at the storage folder level:
#-------storage/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my-first-terraform-bucket" {
  bucket        = "first-terraform-bucket"
  acl           = "private"
  force_destroy = true
}
And below is the snippet from the main_code level referencing the storage module:
#-------main_code/main.tf
module "storage" {
  source = "../storage"
}
When I issue terraform init / plan / apply from the storage folder it works absolutely fine and Terraform creates the S3 bucket.
But when I try the same from the main_code folder I get the error below:
main_code#DFW11-8041WL3: terraform init
Initializing modules...
- module.storage
Error downloading modules: Error loading modules: module storage: No Terraform configuration files found in directory: .terraform/modules/0d1a7f4efdea90caaf99886fa2f65e95
I have read many issue threads on Stack Overflow and other GitHub issue forums, but none of them helped resolve this. Not sure what I am missing!

Just update the existing modules by running terraform get --update. If that does not work, delete the .terraform folder and re-run terraform init.

I agree with the comment from @rclement.
There are several ways to troubleshoot Terraform module issues:
Clean the .terraform folder and rerun terraform init.
This is always the first choice, but it takes time: the next terraform init starts installing all providers and modules again.
If you don't want to clean .terraform, to save deployment time you can run terraform get --update=true instead.
In most cases you have made some changes in the modules, and they need to be refreshed, as sketched below.
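A minimal sketch of both approaches, run from the directory that calls the module (main_code in the first question):

# Option 1: clean re-initialization (slower; providers and modules are re-downloaded)
rm -rf .terraform
terraform init

# Option 2: refresh only the cached modules (faster)
terraform get --update=true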

I had a similar issue, but the problem for me was that the module I had created was looking for a providers.tf, so I had to add one for the module as well and it worked.
├── main.tf
├── modules
│   └── droplets
│       ├── main.tf
│       ├── providers.tf
│       └── variables.tf
└── variables.tf
My provider configuration was previously only present in the root location, which the module could not use, so that was the issue for me.
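As an illustration only (the answer does not show the file's contents, and the provider is assumed from the droplets module name), a modules/droplets/providers.tf could declare the module's provider requirement like this:

terraform {
  required_providers {
    # Assumed provider for illustration; substitute whichever provider
    # the module's resources actually use
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}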

Related

Terraform child module does not inherit provider from root module

My problem
I am having trouble defining the provider for my module.
Terraform fails to find the provider's plugin when I run terraform init and it shows the wrong provider for my module when I run terraform providers.
Setup
I am using Terraform version 1.3.7 on Debian 11.
Here's an example of what I am trying to do.
I have a main.tf that holds my main configuration and module calls. In this example I use a single module for creating a Docker container.
.
├── main.tf
└── modules/
    └── container_module/
        └── main.tf
In the root module project/main.tf file, I define the provider and call the module:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.1"
    }
  }
}

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

module "container" {
  source = "./modules/container_module"
}
In the modules/container_module/main.tf, I create the docker container resource:
resource "docker_image" "debian" {
name = "debian:latest"
}
resource "docker_container" "foo" {
image = docker_image.debian.image_id
name = "foo"
}
What I expect to happen
When I run terraform init, it should download the provider's plugin from kreuzwerker/docker.
What actually happens
Instead, Terraform downloads the plugin from kreuzwerker/docker once, then attempts to download it again from hashicorp/docker.
Here's the command's output:
terraform init
Initializing modules...
- container in modules/container_module
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/docker...
- Finding kreuzwerker/docker versions matching "3.0.1"...
- Installing kreuzwerker/docker v3.0.1...
- Installed kreuzwerker/docker v3.0.1 (self-signed, key ID BD080C4571C6104C)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/docker: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/docker
│
│ Did you intend to use kreuzwerker/docker? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/docker, run the following command:
│ terraform providers
╵
When I run terraform providers I get two different sources depending on the file:
terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/kreuzwerker/docker] 3.0.1
└── module.container
    └── provider[registry.terraform.io/hashicorp/docker]
According to the documentation, the child modules should inherit the provider from their parent:
Default Behavior: Inherit Default Providers:
If the child module does not declare any configuration aliases, the providers argument is optional. If you omit it, a child module inherits all of the default provider configurations from its parent module. (Default provider configurations are ones that don't use the alias argument.)
What I have already checked
Do terraform modules need required_providers?
This answer confirms the provider inheritance.
Terraform provider's resources not available in other tf files:
This question didn't help.
Terraform, providers miss inherits on module
The answer to this similar question says that I should add required_providers in the child module, but it is for an older version and it contradicts what I saw elsewhere.
I have the same issue when I create a providers.tf file in the root directory.
My question
How should I declare my provider so that the child module can inherit the provider from the root module?
kreuzwerker/docker is not a HashiCorp provider. Thus, as explained here, you have to explicitly define required_providers in each module; such providers are not inherited.
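Concretely, a minimal sketch of what the child module needs, mirroring the root module's declaration from the question, is a terraform block in modules/container_module/main.tf:

terraform {
  required_providers {
    # Non-HashiCorp providers must be declared in every module that uses them;
    # only the provider *configuration* (the provider block) is inherited
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.1"
    }
  }
}

With this in place, terraform providers should report kreuzwerker/docker for both the root module and module.container.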

Terragrunt provider.tf generation not generating files

The expectation is that when running terragrunt run-all apply from the root directory, a provider.tf file will be created in the sub directories. I've verified my backend is able to talk to my Azure storage account, and it will create a terraform.tfstate file there. I expect a provider.tf file to appear under each "service#" folder. I'm extremely new to terragrunt; this is my first exercise with it. I am not actually trying to deploy any Terraform resources, just have the provider.tf file created in my sub directories. TF version is 1.1.5. Terragrunt version is 0.36.1.
My folder structure:
tfpractice
├───terragrunt.hcl
├───environment_vars.yaml
├───dev
│   ├───service1
│   │   └───terragrunt.hcl
│   └───service2
│       └───terragrunt.hcl
└───prod
    ├───service1
    │   └───terragrunt.hcl
    └───service2
        └───terragrunt.hcl
Root terragrunt.hcl config:
# Generate provider configuration for all child directories
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.94.0"
    }
  }
  backend "azurerm" {}
}
provider "azurerm" {
  features {}
}
EOF
}

# Remote backend settings for all child directories
remote_state {
  backend = "azurerm"
  config = {
    resource_group_name  = local.env_vars.resource_group_name
    storage_account_name = local.env_vars.storage_account_name
    container_name       = local.env_vars.container_name
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}

# Collect values from environment_vars.yaml and set as local variables
locals {
  env_vars = yamldecode(file("environment_vars.yaml"))
}
environment_vars.yaml config:
resource_group_name: "my-tf-test"
storage_account_name: "mystorage"
container_name: "tfstate"
terragrunt.hcl config in the service# folders:
# Collect values from parent environment_vars.yaml file and set as local variables
locals {
  env_vars = yamldecode(file(find_in_parent_folders("environment_vars.yaml")))
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}
When I run terragrunt run-all apply, this is the output:
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) y
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
(the same initialization, "No changes", and "Apply complete!" messages repeat for each of the four service folders)
It looks successful; however, NO provider.tf files show up in ANY directory, not even root. It just creates a terraform.tfstate file under the service# directories.
But if I run terragrunt init from the root directory, it will create the provider.tf file as expected in the root directory. This does NOT work in the service# directories, although the terragrunt init is successful.
What am I missing? This is the most basic terragrunt use case, and the examples led me to believe this should just work.
I got it to work, but the terragrunt run-all apply command doesn't work at all for this. Instead, at the root I have to run terragrunt apply. If you don't run it at the root, then all sub folders get grouped together rather than into dev/prod sub folders. Then I have to go into every sub folder and run it again. It's the only way I've gotten it to work.
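A sketch of that workaround sequence, assuming the folder layout from the question:

# Run once at the root so the dev/prod hierarchy is respected
cd tfpractice
terragrunt apply

# Then repeat in each service folder
cd dev/service1 && terragrunt apply
cd ../service2 && terragrunt apply
cd ../../prod/service1 && terragrunt apply
cd ../service2 && terragrunt apply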

Inconsistent dependency when I do terraform apply from plan -out=file

I am attempting to create new resources on GCP with a remote backend.
After running terraform init, then terraform plan -out=tfplan and terraform apply tfplan, I get the following error:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ Terraform 0.13 and earlier allowed provider version constraints inside the
│ provider configuration block, but that is now deprecated and will be
│ removed in a future version of Terraform. To silence this warning, move the
│ provider version constraint into the required_providers block.
│
│ (and 22 more similar warnings elsewhere)
╵
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
╷
│ Error: Inconsistent dependency lock file
│
│ The given plan file was created with a different set of external dependency
│ selections than the current configuration. A saved plan can be applied only
│ to the same configuration it was created from.
│
│ Create a new plan from the updated configuration.
On the other hand, when I run terraform init, terraform plan, and terraform apply -auto-approve, it works with no issues.
Part of the work of terraform init is to install all of the required providers into a temporary directory .terraform/providers so that other Terraform commands can run them. For entirely new providers, it will also update the dependency lock file to record which versions it selected so that future terraform init can guarantee to make the same decisions.
If you are running terraform apply tfplan in a different directory than where you ran terraform plan -out=tfplan then that local cache of the needed provider plugins won't be available and thus the apply will fail.
Separately from that, it also seems that when you ran terraform init prior to creating the plan, Terraform had to install some new providers that were not previously recorded in the dependency lock file, and so it updated the dependency lock file. However, when you ran terraform apply tfplan later those changes to the lock file were not visible and so Terraform reported that the current locks are inconsistent with what the plan was created from.
The Running Terraform in Automation guide has a section Plan and Apply on Different Machines which discusses some of the special concerns that come into play when you're trying to apply somewhere other than where you created the plan. However, I'll try to summarize the parts which seem most relevant to your situation, based on this error message.
Firstly, an up-to-date dependency lock file should be recorded in your version control system so that your automation is only reinstalling previously-selected providers and never making entirely new provider selections. That will then ensure that all of your runs use the same provider versions, and upgrades will always happen under your control.
You can make your automation detect this situation by adding the -lockfile=readonly option to terraform init, which makes that command fail if it would need to change the dependency lock file in order to perform its work:
terraform init -lockfile=readonly
If you see that fail in your automation, then the appropriate fix would be to run terraform init without -lockfile=readonly inside your development environment, and then check the updated lock file into your version control system.
If you cannot initialize the remote backend in your development environment, you can skip that step but still install the needed providers by adding -backend=false, like this:
terraform init -backend=false
Getting the same providers reinstalled again prior to the apply step is the other part of the problem here.
The guide I linked above suggests to achieve this by archiving up the entire working directory after planning as an artifact and then re-extracting it at the same path in the apply step. That is the most thorough solution, and in particular is what Terraform Cloud does in order to ensure that any other files created on disk during planning (such as using the archive_file data source from the hashicorp/archive provider) will survive into the apply phase.
However, if you know that your configuration itself doesn't modify the filesystem during planning (which is a best practice, where possible) then it can also be valid to just re-run terraform init -lockfile=readonly before running terraform apply tfplan, which will therefore reinstall the previously-selected providers, along with all of the other working directory initialization work that terraform init usually does.
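Putting those pieces together, a minimal sketch of the two-stage sequence described above (the commands are real; the split into stages is the assumed pipeline structure):

# Plan stage: the committed .terraform.lock.hcl drives provider selection
terraform init -lockfile=readonly
terraform plan -out=tfplan

# Apply stage, possibly on a different machine with a fresh working directory:
# reinstall the same previously-selected providers, then apply the saved plan
terraform init -lockfile=readonly
terraform apply tfplan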
As a final note, tangential to the rest of this, it seems like Terraform was also printing a warning about a deprecated language feature and on your system the warning output became interleaved with the error output, making the first message confusing because it includes a paragraph from the warning inside of it.
I believe the intended error message text, without the errant extra content from the warning, is as follows:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│ - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/external: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google-beta: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/google: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/local: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected
│ - provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected
│
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
There could be various reasons, but the main one is that you updated your configuration and it now uses new providers or modules. In my case I updated my template and added the random module. The solution to that problem is to upgrade the lock file with the command below:
terraform init -upgrade
It will install the new modules and make the lock file consistent automatically.
When you run terraform init, it generates a file called .terraform.lock.hcl. Make sure you have this file in the directory where you are running terraform apply. If you are using CI/CD to run Terraform, make sure the file is also available when running terraform apply.
For example, in a GitLab CI/CD pipeline, you can add these paths to the cache:
cache:
  paths:
    - .terraform
    - .terraform.lock.hcl

upgrade from 0.12 to 0.13: Failed to instantiate provider "registry.terraform.io/-/aws" to obtain

I'm trying to upgrade from terraform 0.12 to 0.13.
There seems to be no specific syntax problem: when I run terraform 0.13upgrade nothing is changed;
only a version.tf file is added:
+terraform {
+  required_providers {
+    aws = {
+      source = "hashicorp/aws"
+    }
+  }
+  required_version = ">= 0.13"
+}
And when I run terraform plan I get:
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
2 problems:
- Failed to instantiate provider "registry.terraform.io/-/aws" to obtain
schema: unknown provider "registry.terraform.io/-/aws"
- Failed to instantiate provider "registry.terraform.io/-/template" to obtain
schema: unknown provider "registry.terraform.io/-/template"
Running terraform providers shows:
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws]
├── module.bastion
│   ├── provider[registry.terraform.io/hashicorp/template]
│   └── provider[registry.terraform.io/hashicorp/aws]
└── module.vpc
    └── provider[registry.terraform.io/hashicorp/aws] >= 2.68.*

Providers required by state:
  provider[registry.terraform.io/-/aws]
  provider[registry.terraform.io/-/template]
So my guess is that for some reason I have -/aws instead of hashicorp/aws in my tfstate; however, I can't find this specific string at all in the tfstate.
I tried:
- running terraform init
- terraform init -reconfigure
- deleting the .terraform folder
- deleting the ~/.terraform.d folder
So I'm running out of ideas on how to solve this problem.
I followed the steps here
terraform state replace-provider registry.terraform.io/-/template registry.terraform.io/hashicorp/template
terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws
and it fixed my problem.
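As a quick sanity check (not part of the original answer), terraform providers should afterwards list the hashicorp namespace under "Providers required by state":

terraform providers
# Providers required by state:
#   provider[registry.terraform.io/hashicorp/aws]
#   provider[registry.terraform.io/hashicorp/template]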

Terraform Enterprise module file location error

Need some assistance regarding using files within modules from Terraform Enterprise. The Git top-level folder structure is like this:
modules
README.md
main.tf
Within the modules folder the structure is like this:
modules
main.tf
file1.json
The main.tf within modules refers to file1.json like below:
resource "aws_iam_policy" "deny_bill_policy" {
name = "OpsPolicy"
path = "/"
policy = "${file("${path.module}/file1.json")}"
}
The same configuration runs without any issues from my local PC to deploy on AWS, but when I run it through Terraform Enterprise, which pulls the repo from Git, it throws the following error:
module.policy_roles.aws_iam_policy.deny_bill_policy: file: open file1.json: no such file or directory in: ${file("${path.module}/file1.json")}
FYI: no previous/old .terraform dir existed. It seems TFE handles module paths differently. Can someone please assist me here?
