Terraform Modules not working as expected

We are using a private GitHub and Terraform Cloud for our projects. Everything is able to talk to each other, so there is no issue there. However, I'm trying to create modules for a project I started. I was able to make it work as regular Terraform files, but when I try to convert to the module system I am having issues getting the state imported.
We have a separate repository called tf-modules. In this repository, my directory setup is:
> root
>> mymodule
>>> lambda.tf
>>> eventbridge.tf
>>> bucket.tf
These files manage the software being deployed in our AWS environment. They are being used across multiple environments for each of our customers (each separated out by environment [qa, dev, prod]).
In my terraform files, I have:
> root
>> CUSTNAME
>>> mymodule
>>>> main.tf
Inside main.tf I have:
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git"
}
In my dev environment, everything is already set up, so I need to import the state. However, it's not detecting the resources at all. In the .terraform directory, it is downloading the entire repository (the root with the README.md and all).
I'm fairly new to Terraform. Am I approaching this wrong or misunderstanding?
I am using the latest version of Terraform.

Since there is a sub-directory "mymodule", you should specify the whole path:
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule"
}
Refer to module sources - sub-directory.
Example: git::https://example.com/network.git//modules/vpc
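For reproducibility you can also pin the module to a specific Git ref alongside the sub-directory path; a minimal sketch, assuming a hypothetical v1.0.0 tag exists in tf-modules:
module "mymodule" {
  # //mymodule selects the sub-directory; ?ref= pins a tag, branch, or commit
  source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule?ref=v1.0.0"
}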

Related

Terraform cloud failing when referencing module using relative local path

I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq. The directories are set up like this:
tf/
  configs/
    thing-1/
    thing-2/
  modules/
    rabbitmq/
    cluster/
    ...
The configs are setup with a remote backend to use TF Cloud for runs and state:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      prefix = "config-1-"
    }
  }
}
Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:
Initializing modules...
- rabbitmq in
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory
...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?
It turns out the problem was (surprise, surprise!) that it was indeed not uploading the modules directory to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload just the directory from which you run terraform (and all of its contents).
To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1
After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace. 🎉
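If you manage workspaces with the tfe provider instead of the UI, the same setting can be applied in code; a minimal sketch, assuming hypothetical workspace and organization names:
resource "tfe_workspace" "config_1" {
  name              = "config-1-prod"       # hypothetical workspace name
  organization      = "my-org"
  working_directory = "tf/configs/config-1" # path relative to the repo root
}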
Updating @mlsy's answer with a screenshot. Using Terraform Cloud with a free account, resolving module sources via the local file system.
terraform version
Terraform v1.1.7
on linux_amd64
Here is the thing that worked for me. I used required_version = ">= 0.11"
and then put all the .tf files which have provider and module blocks in a subfolder, keeping the version.tf which has the required providers at the root level. Somehow I used the same folder path where terraform.exe is present. Then I built the project instead of executing at the main.tf level or executing without building. It downloaded all providers and modules for me. I have yet to run it on GCP.
[Screenshot: folder path on Windows]
[Screenshot: IntelliJ project structure]
Use this: source = "mhmdio/rabbitmq/aws"
I faced this problem when I started. Go to the HashiCorp Terraform site and search for the module/provider block; they list the full path, and the code snippets are written this way. Once you have the path, run terraform get -update and terraform init -upgrade,
which will download the modules and providers locally.
Note: on cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
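For a registry module like the one above, the source is the registry path and the version is pinned separately; a minimal sketch (the version number is a placeholder):
module "rabbitmq" {
  source  = "mhmdio/rabbitmq/aws"
  version = "1.0.0" # placeholder - pin to a real released version
}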
I had a similar issue, which I think others might encounter.
In my project, the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2 there was an issue, because it tried to load every folder from the root of the repository.
% terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
/Users/yokulguy/Development/arepository
╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵
The solution: sometimes I just need to remove the "bad" folder, which is docker/mysql, and then rerun terraform plan and it works.
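A less destructive alternative, hinted at in the output above, is a .terraformignore file at the repository root, which uses .gitignore-style patterns to exclude paths from the remote upload. A minimal sketch for this case:
# .terraformignore at the root of the repository
# exclude the directory containing the broken symlink from remote uploads
docker/mysql/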

Should I version the .terraform folder?

I am starting to use Terraform and I have a .terraform folder created by "terraform init/apply" containing:
./plugins/linux_amd64/lock.json
./plugins/linux_amd64/terraform-provider-google
./plugins/modules/modules.json
terraform.tfstate
Should I version these files? I would say no...
The .terraform directory is a local cache where Terraform retains some files it will need for subsequent operations against this configuration. Its contents are not intended to be included in version control.
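A common convention (not part of the original answer) is to exclude it in .gitignore, along with any local state files:
# .gitignore
.terraform/
*.tfstate
*.tfstate.backup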
However, you can ensure that you can faithfully reproduce this directory on other systems by specifying certain things in your configuration that inform what Terraform will place in there:
Use required_providers in a terraform block to specify an exact version constraint for the Google Cloud Platform provider:
terraform {
  required_providers {
    google = "3.0.0"
  }
}
(this relates to the .terraform/plugins directory)
In each module you call (which seems to be none so far, but perhaps in future), ensure its source refers to an exact version rather than to a floating branch (for VCS modules) or set version to an exact version (for modules from Terraform Registry):
module "example"
source = "git::https://github.com/example/example.git?ref=v2.0.0"
# ...
}
module "example"
source = "hashicorp/consul/aws"
version = "v1.2.0
}
(this relates to the .terraform/modules directory)
If you are using a remote backend, include the full configuration in the backend block inside the terraform block, rather than using the -backend-config argument to terraform init.
(this relates to the .terraform/terraform.tfstate file, which remembers your active backend configuration for later operations)
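For example, a fully specified backend block (the bucket and key below are placeholders) needs no -backend-config arguments at init time:
terraform {
  backend "s3" {
    bucket = "example-state-bucket"   # placeholder bucket name
    key    = "prod/terraform.tfstate" # placeholder state key
    region = "us-east-1"
  }
}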

Terraform Enterprise module file location error

Need some assistance regarding using files within modules from Terraform Enterprise. The Git top-level folder structure is like this:
modules
README.md
main.tf
Within modules, the folder structure is like this:
modules
  main.tf
  file1.json
The main.tf within modules refers to file1.json like below:
resource "aws_iam_policy" "deny_bill_policy" {
name = "OpsPolicy"
path = "/"
policy = "${file("${path.module}/file1.json")}"
}
The same program runs without any issues from my local PC deploying to AWS, but when I run the same through Terraform Enterprise, which pulls the repo from Git, it throws the following error:
module.policy_roles.aws_iam_policy.deny_bill_policy: file: open file1.json: no such file or directory in: ${file("${path.module}/file1.json")}
FYI - there is no previous/old .terraform dir. It seems TFE handles module paths differently. Can someone please assist me here?

terraform sub-module changes not being recognized in plan or apply

I have a Terraform repo that looks something like this:
infrastructure
  global
    main.tf
The main.tf file references a module in a remote repository:
module "global" {
source = "git#github.com/company/repo//domain/global"
}
and the above module makes a reference to another module within the same remote repo, in its main.tf:
module "global" {
source = "git#github.com/company/repo//infrastructure/global"
}
If I make a change in the module that's 3 levels deep, and then run terraform get and terraform init in the top-level Terraform project followed by terraform plan, those changes aren't picked up.
Is there any reason for this?
I needed to do one of the following:
1) when running terraform init, pass the flag -upgrade
2) or if running terraform get, pass the flag -update
This downloads the latest versions of the requested modules.
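An alternative (not from the original answer) is to pin each remote module source to a Git tag, so picking up changes becomes an explicit version bump rather than a cache refresh (the tag below is hypothetical):
module "global" {
  # bumping ref forces terraform init to fetch the new module version
  source = "git@github.com:company/repo//domain/global?ref=v1.2.0"
}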

How to copy in additional files during "terraform get" that reside outside of the module directory?

The Hashicorp Consul repository contains a Terraform module for launching a Consul cluster in AWS. The module references several files that are found in the parent directory of the module under the shared/scripts directory here https://github.com/hashicorp/consul/tree/master/terraform
However, when I reference the module in one of my .tf files and run terraform get to download the module, the required files under shared/scripts/ are not included with the downloaded module files, leading to errors like the one described here
My module section in Terraform looks like this:
module "consul" {
source = "github.com/hashicorp/consul/terraform/aws"
key_name = "example_key"
key_path = "/path/to/example_key"
region = "us-east-1"
servers = "3"
platform = "centos7"
}
Is there any way to have terraform get pull in files that live outside the module directory?
Thanks
From looking at what those files do, I'd just copy the ones you need (depending on whether you're deploying on Debian or RHEL), which will be 2-3 files, and feed them into provisioner "file":
https://www.terraform.io/docs/provisioners/file.html
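A minimal sketch of that approach, assuming you've copied the upstream script into a local scripts/ directory and are provisioning the instance yourself (the resource, AMI, and file names are hypothetical):
resource "aws_instance" "consul" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"
  key_name      = "example_key"

  connection {
    type        = "ssh"
    user        = "centos"
    private_key = file("/path/to/example_key")
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "scripts/install.sh" # local copy of the shared script
    destination = "/tmp/install.sh"
  }
}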
