Terraform Cloud failing when referencing module using relative local path

I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq. The directories are set up like this:
tf/
  configs/
    thing-1/
    thing-2/
  modules/
    rabbitmq/
    cluster/
    ...
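Each config pulls in a shared module with a block roughly like this (a minimal sketch; the actual rabbitmq inputs are omitted):

module "rabbitmq" {
  source = "../../modules/rabbitmq"

  # module inputs go here
}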
The configs are set up with a remote backend to use TF Cloud for runs and state:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      prefix = "config-1-"
    }
  }
}
Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:
Initializing modules...
- rabbitmq in
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory
...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?

It turns out the problem was (surprise, surprise!) that the modules directory was not being uploaded to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload only the directory from which you run terraform (and all of its contents).
To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1
After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace. 🎉

Updating #mlsy's answer with a screenshot. I am using Terraform Cloud with a free account, resolving the module source using the local file system.
terraform version
Terraform v1.1.7
on linux_amd64

Here is what worked for me. I used required_version = ">= 0.11"
and then put all the .tf files that have provider and module blocks in a subfolder, keeping the version.tf with the required providers at the root level. I also used the same folder path where terraform.exe is present. Then I built the project (instead of executing at the main.tf level or running without building), and it downloaded all the providers and modules for me. I have yet to run it on GCP.
[Screenshot: folder path on Windows]
[Screenshot: IntelliJ project structure]
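For reference, a minimal sketch of what the root-level version.tf can look like; the google provider entry is an assumption (the answer only mentions GCP), and the required_providers source syntax needs Terraform 0.13 or newer:

terraform {
  required_version = ">= 0.11"

  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}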

Use this: source = "mhmdio/rabbitmq/aws"
I faced this problem when I started. Go to the hashicorp/terraform site and search for the module/provider block; the code snippets there are written with the full source path. Once you have the path, run terraform get -update and terraform init -upgrade, which will download the modules and providers locally.
Note: on Terraform Cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
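For example, a module block using that registry source might look roughly like this (the version constraint is an assumption; pin whichever release you have verified):

module "rabbitmq" {
  source  = "mhmdio/rabbitmq/aws"
  version = "~> 1.0"

  # module inputs go here
}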

I had a similar issue, which I think others might encounter.
In my project the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2 there was a problem, because Terraform tried to upload every folder from the root of the repository.
% terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
/Users/yokulguy/Development/arepository
╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵
The solution: sometimes I just need to remove the "bad" folder, which in this case is docker/mysql, and then rerun terraform plan and it works.
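Since the plan output above mentions .terraformignore, another option (a sketch, assuming the broken symlink lives only under docker/mysql) is to exclude that directory from the upload by placing a .terraformignore file at the repository root:

# .terraformignore at the root of the repository
docker/mysql/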

Related

Terraform parent directory uploads issue

When running terraform in remote execution mode, terraform uploads the contents of the configuration directory to Terraform Cloud. In my case, the contents of the infrastructure directory get uploaded:
functions
  function-a.zip
  function-b.zip
  ...
infrastructure
  main.tf
  ...
I have some terraform modules inside the infrastructure folder that reference binary files (zip files) living outside the terraform config directory. These are build artifacts that are created prior to executing terraform. The issue is that those zip files are not copied over when executing terraform remotely, which obviously causes some errors.
Is there a way to ensure that my zip files get uploaded to terraform cloud without having to put them inside the config directory?
Thanks to #paulg, I was able to selectively upload my function artifacts using a symlink.
Here's what my new folder structure looks like:
functions
  function-a.zip
  function-b.zip
  ...
infrastructure
  artifacts
    functions (symlink to ../../functions)
  main.tf
  ...
The only thing I needed to do after that was to adjust my modules to reference my function artifacts through the symlink.
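As an illustration, after creating the symlink (for example with ln -s ../../functions infrastructure/artifacts/functions from the repository root), a resource can read an artifact through it; the lambda resource below and all of its names are assumptions based on the layout above:

resource "aws_lambda_function" "function_a" {
  # hypothetical resource; name, role, handler and runtime are assumed
  function_name = "function-a"
  role          = aws_iam_role.lambda_exec.arn
  handler       = "index.handler"
  runtime       = "nodejs18.x"

  # the artifact is reached through the symlink inside the config directory
  filename         = "${path.module}/artifacts/functions/function-a.zip"
  source_code_hash = filebase64sha256("${path.module}/artifacts/functions/function-a.zip")
}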

Terraform Modules not working as expected

We are using private GitHub and Terraform Cloud for our projects. Everything is able to talk to each other, so there is no issue there. However, I'm trying to create modules for a project I started. I was able to make it work as regular terraform files, but when I try to convert to the module system I am having issues with getting the state imported.
We have a separate repository called tf-modules. In this repository, my directory setup:
> root
>> mymodule
>>> lambda.tf
>>> eventbridge.tf
>>> bucket.tf
These files manage the software being deployed in our AWS environment. They are being used across multiple environments for each of our customers (each separated out by environment [qa, dev, prod]).
In my terraform files, I have:
> root
>> CUSTNAME
>>> mymodule
>>>> main.tf
Inside main.tf I have:
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git"
}
In my dev environment, everything is set up so I need to import the state. However, it's not detecting the resources at all. In the .terraform directory, it is downloading the entire repository (the root with the readme.md and all)
I'm fairly new to Terraform. Am I approaching this wrong or misunderstanding?
I am using the latest version of Terraform.
Since there is a sub-directory "mymodule", you should specify the whole path.
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule"
}
Refer to module sources - sub directory
Example: git::https://example.com/network.git//modules/vpc
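If you also want to pin the module to a specific tag or branch, a ref query parameter can be appended to the same source (the tag name here is only an example):

module "mymodule" {
  source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule?ref=v1.0.0"
}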

Unable to see the Terraform provider file after running terraform init

I am stuck at a point where, when I run terraform init, the provider is not getting downloaded and I get no error. I am using a main.tf file that contains only a provider "azurerm" block. When I run terraform init I get only the output below, and I see no sign of the provider file being initialized or downloaded. I am logged in and authenticated to the Azure login page too.
Terraform Code> terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Terraform creates a hidden folder to store the provider. Make sure you set your OS permissions to see hidden files and folders.
From the documentation:
A hidden .terraform directory, which Terraform uses to manage cached provider plugins and modules, record which workspace is currently active, and record the last known backend configuration in case it needs to migrate state on the next run. This directory is automatically managed by Terraform, and is created during initialization.
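For example, you can confirm what init put there from the shell (commands shown for Linux/macOS; the directory layout is the default one):

# show the providers the configuration requires
terraform providers

# list what terraform init installed in the hidden directory
ls -la .terraform/providers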

terraform 1.0.6 cannot find providers if I change directory

If I run terraform -version from the directory that I ran terraform init, terraform correctly finds the plugins.
But if I run terraform -version from any other directory, terraform does NOT find any provider plugins.
My ~/.terraformrc file looks like this:
provider_installation {
  filesystem_mirror {
    path    = "/.terraform/providers"
    include = ["registry.terraform.io/hashicorp/*"]
  }
}
Inside that directory, I have the aws provider binary (it was placed there by terraform init, so I know the directory structure is correct):
/.terraform/providers/registry.terraform.io/hashicorp/aws/3.55.0/linux_amd64/terraform-provider-aws_v3.55.0_x5
When I cd to /, terraform correctly finds the provider:
# terraform -version
Terraform v1.0.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.55.0
But if I cd to /tmp, terraform does NOT find the provider:
/tmp # terraform -version
Terraform v1.0.6
on linux_amd64
So that tells me there is something not right with the .terraformrc file.
If I run that with TRACE, it doesn't say much:
2021-09-08T03:01:30.336Z [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2021-09-08T03:01:30.336Z [INFO] Loading CLI configuration from /root/.terraformrc
2021-09-08T03:01:30.336Z [DEBUG] Explicit provider installation configuration is set
2021-09-08T03:01:30.336Z [TRACE] Selected provider installation method cliconfig.ProviderInstallationFilesystemMirror("/.terraform/providers") with includes [registry.terraform.io/hashicorp/*] and excludes []
2021-09-08T03:01:30.337Z [INFO] CLI command args: []string{"version", "-version"}
Terraform v1.0.6
on linux_amd64
How can I tell terraform to look in "/.terraform/providers" for all of its providers?
I think in your investigations here you are confusing two different concepts for Terraform provider installation: a local filesystem mirror (which you've configured in your CLI configuration), and your current working directory's provider cache (which is where terraform init installs the providers for the current working directory).
A local filesystem mirror happens to support a similar directory structure to a provider cache directory, and so what you've achieved by running Terraform in the root directory is to trick Terraform into thinking that your local mirror is the local cache directory for that working directory. It doesn't work anywhere else because in that case these two separate directories are truly separate, which is the expected way to use Terraform.
The other relevant thing to know in what you tried here is that terraform version will only show providers that are already installed and activated by terraform init. Again, because you tricked Terraform into treating your filesystem mirror as the local cache directory it happened to find the required hashicorp/aws plugin there, but when you ran it elsewhere there was no local cache directory and so it returned nothing.
You mentioned that your goal is to have Terraform look for providers only in your local filesystem directory, in /.terraform. I'll note that a dot file in the root of your filesystem is a rather unusual place to keep a local filesystem mirror, but I'll use that path here anyway since you asked about it in your question.
First, configure the filesystem mirror as you already did, and also configure the "direct" installation method (installing from the origin registry) to exclude those same providers:
provider_installation {
  filesystem_mirror {
    path    = "/.terraform/providers"
    include = ["registry.terraform.io/hashicorp/*"]
  }
  direct {
    exclude = ["registry.terraform.io/hashicorp/*"]
  }
}
The above means that for any provider in the hashicorp namespace on the public registry, Terraform will only look in /.terraform. For all other providers, Terraform will contact the origin registry in the usual way and try to download the provider over the network.
With that configuration in place, change your shell's current working directory to be the root directory of your configuration, which is the directory containing your root set of .tf files. To initialize that as a working directory, which will install all of the needed providers and external modules, run:
terraform init
Terraform will then read the configuration to learn which providers it requires, and will automatically install each of them in turn. Because of the CLI configuration above, Terraform will not contact the origin registry for any of the hashicorp namespace providers, and will instead expect them to already be present in the mirror directory. Regardless of whether each provider was found in the local filesystem mirror or directly in the origin registry, they'll all then be cached in the .terraform/providers subdirectory of your working directory, which is the local cache directory.
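For example, a root module that declares the provider already present in your mirror (version taken from the directory listing above) would be installed entirely from /.terraform/providers under this configuration:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.55.0"
    }
  }
}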
You specifically asked about preventing Terraform from accessing the origin registry at all here and so I've answered with that in mind. However, a more common request is to have Terraform contact the registry the first time it installs each provider and then cache the result locally for future installation. If that is what you want to achieve here then you should enable the global provider plugin cache instead.
In that case, the global cache directory becomes a secondary read-through cache for the local cache directory I was describing above. When installing new providers to the local cache directory, Terraform will first check the global cache directory and either copy or symlink (depending on whether your system can support symlinks) the global cache entry into the local cache directory, and thus avoid re-downloading the same package from the registry again.
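A minimal sketch of enabling that global cache in your CLI configuration (~/.terraformrc); the directory path is the commonly documented example and must already exist:

plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"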

Terraform Enterprise module file location error

Need some assistance regarding using files within modules from Terraform Enterprise. The Git top-level folder structure is like this:
modules
README.md
main.tf
Within the modules folder, the structure is like this:
modules
  main.tf
  file1.json
The main.tf within modules refers to file1.json like below:
resource "aws_iam_policy" "deny_bill_policy" {
name = "OpsPolicy"
path = "/"
policy = "${file("${path.module}/file1.json")}"
}
The same program runs without any issues from my local PC to deploy on AWS, but when I run the same through Terraform Enterprise (which pulls the repo from Git) it throws the following error.
module.policy_roles.aws_iam_policy.deny_bill_policy: file: open file1.json: no such file or directory in: ${file("${path.module}/file1.json")}
FYI, there was no previous/old .terraform dir. It seems TFE handles modules/paths differently. Can someone please assist me here?
