When running Terraform in remote execution mode, Terraform uploads the contents of the configuration directory to Terraform Cloud. In my case, the contents of the infrastructure directory get uploaded:
functions
function-a.zip
function-b.zip
...
infrastructure
main.tf
...
I have some Terraform modules inside the infrastructure folder that reference binary files (zip files) living outside the Terraform config directory. These are build artifacts created prior to executing Terraform. The issue is that those zip files are not copied over when executing Terraform remotely, which obviously causes some errors.
Is there a way to ensure that my zip files get uploaded to Terraform Cloud without having to put them inside the config directory?
Thanks to @paulg, I was able to selectively upload my function artifacts using a symlink.
Here's what my new folder structure looks like:
functions
function-a.zip
function-b.zip
...
infrastructure
artifacts
functions (symlink to ../../functions)
main.tf
...
The only thing I needed to do after that was to adjust my modules to reference my function artifacts through the symlink.
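For illustration, here is roughly how I created the symlink (run from inside the infrastructure directory):
mkdir -p artifacts
ln -s ../../functions artifacts/functions
And a hypothetical example of referencing an artifact through it from the root module (the local name is just for illustration):
locals {
  function_a_package = "${path.root}/artifacts/functions/function-a.zip"
}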
Related
If I run terraform -version from the directory that I ran terraform init, terraform correctly finds the plugins.
But if I run terraform -version from any other directory, terraform does NOT find any provider plugins.
My ~/.terraformrc file looks like this:
provider_installation {
filesystem_mirror {
path = "/.terraform/providers"
include = ["registry.terraform.io/hashicorp/*"]
}
}
Inside that directory, I have the aws provider binary (it was placed there by terraform init, so I know the directory structure is correct):
/.terraform/providers/registry.terraform.io/hashicorp/aws/3.55.0/linux_amd64/terraform-provider-aws_v3.55.0_x5
When I cd to /, terraform correctly finds the provider:
# terraform -version
Terraform v1.0.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.55.0
But if I cd to /tmp, terraform does NOT find the provider:
/tmp # terraform -version
Terraform v1.0.6
on linux_amd64
So that tells me there is something not right with the .terraformrc file.
If I run that with TRACE, it doesn't say much:
2021-09-08T03:01:30.336Z [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2021-09-08T03:01:30.336Z [INFO] Loading CLI configuration from /root/.terraformrc
2021-09-08T03:01:30.336Z [DEBUG] Explicit provider installation configuration is set
2021-09-08T03:01:30.336Z [TRACE] Selected provider installation method cliconfig.ProviderInstallationFilesystemMirror("/.terraform/providers") with includes [registry.terraform.io/hashicorp/*] and excludes []
2021-09-08T03:01:30.337Z [INFO] CLI command args: []string{"version", "-version"}
Terraform v1.0.6
on linux_amd64
How can I tell Terraform to look in `/.terraform/providers` for all of its providers?
I think in your investigations here you are confusing two different concepts for Terraform provider installation: a local filesystem mirror (which you've configured in your CLI configuration), and your current working directory's provider cache (which is where terraform init installs the providers for the current working directory).
A local filesystem mirror happens to support a similar directory structure to a provider cache directory, and so what you've achieved by running Terraform in the root directory is to trick Terraform into thinking that your local mirror is the local cache directory for that working directory. It doesn't work anywhere else because in that case these two separate directories are truly separate, which is the expected way to use Terraform.
The other relevant thing to know in what you tried here is that terraform version will only show providers that are already installed and activated by terraform init. Again, because you tricked Terraform into treating your filesystem mirror as the local cache directory it happened to find the required hashicorp/aws plugin there, but when you ran it elsewhere there was no local cache directory and so it returned nothing.
You mentioned that your goal is to have Terraform look for providers only in your local filesystem directory, in /.terraform. I'll note that a dot file in the root of your filesystem is a rather unusual place to keep a local filesystem mirror, but I'll use that path here anyway since you asked about it in your question.
First, configure the filesystem mirror as you already did, and also configure the "direct" installation method (installing from the origin registry) to exclude those same providers:
provider_installation {
filesystem_mirror {
path = "/.terraform/providers"
include = ["registry.terraform.io/hashicorp/*"]
}
direct {
exclude = ["registry.terraform.io/hashicorp/*"]
}
}
The above means that for any provider in the hashicorp namespace on the public registry, Terraform will only look in /.terraform. For all other providers, Terraform will contact the origin registry in the usual way and try to download the provider over the network.
With that configuration in place, change your shell's current working directory to be the root directory of your configuration, which is the directory containing your root set of .tf files. To initialize that as a working directory, which will install all of the needed providers and external modules, run:
terraform init
Terraform will then read the configuration to learn which providers it requires, and will automatically install each of them in turn. Because of the CLI configuration above, Terraform will not contact the origin registry for any of the hashicorp namespace providers, and will instead expect them to already be present in the mirror directory. Regardless of whether each provider was found in the local filesystem mirror or directly in the origin registry, they'll all then be cached in the .terraform/providers subdirectory of your working directory, which is the local cache directory.
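In practice that just means running, from the configuration's root directory:
terraform init
terraform version
After init succeeds, terraform version should then list the hashicorp/aws provider that was found in the mirror.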
You specifically asked about preventing Terraform from accessing the origin registry at all here and so I've answered with that in mind. However, a more common request is to have Terraform contact the registry the first time it installs each provider and then cache the result locally for future installation. If that is what you want to achieve here then you should enable the global provider plugin cache instead.
In that case, the global cache directory becomes a secondary read-through cache for the local cache directory I was describing above. When installing new providers to the local cache directory, Terraform will first check the global cache directory and either copy or symlink (depending on whether your system can support symlinks) the global cache entry into the local cache directory, and thus avoid re-downloading the same package from the registry again.
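A minimal sketch of that alternative, in the CLI configuration file (the cache path is just an example location, and the directory must already exist before Terraform will use it):
# ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"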
I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq. The directories are set up like this:
tf/
configs/
thing-1/
thing-2/
modules/
rabbitmq/
cluster/
...
The configs are set up with a remote backend to use TF Cloud for runs and state:
terraform {
backend "remote" {
hostname = "app.terraform.io"
organization = "my-org"
workspaces {
prefix = "config-1-"
}
}
}
Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:
Initializing modules...
- rabbitmq in
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory
...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?
It turns out the problem was (surprise, surprise!) that the modules directory was not being uploaded to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload just the directory from which you are running terraform (and all of its contents).
To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1
After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace. 🎉
Updating @mlsy's answer with a screenshot. Using Terraform Cloud with a free account, resolving the module source using the local file system.
terraform version
Terraform v1.1.7
on linux_amd64
Here is what worked for me. I used required_version = ">= 0.11"
and then put all the .tf files that contain provider and module blocks in a subfolder, keeping the version.tf file with the required providers at the root level. Somehow, I used the same folder path where terraform.exe is present. Then I built the project instead of executing at the main.tf level or executing without building. It downloaded all providers and modules for me. I have yet to run it on GCP.
[Screenshot: folder path on Windows]
[Screenshot: IntelliJ project structure]
Use this: source = "mhmdio/rabbitmq/aws"
I faced this problem when I started. Go to the HashiCorp Terraform site and search for the module/provider block; the listings show the full source path, and the code snippets are written that way. Once you have the path, run terraform get -update and terraform init -upgrade,
which will download the modules and providers locally.
Note: on Terraform Cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
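For example, a registry-style module block looks roughly like this (the version constraint is a placeholder; pin it to whatever the registry lists):
module "rabbitmq" {
  source  = "mhmdio/rabbitmq/aws"
  version = "~> 0.1"
  # module inputs go here
}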
I had a similar issue, which I think others might encounter.
In my project, the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2, there was an issue because it tried to upload every folder from the root of the repository.
% terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
/Users/yokulguy/Development/arepository
╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵
The solution is that sometimes I just need to remove the "bad" folder, which is docker/mysql (it contains a symlink whose target no longer exists), and then rerun terraform plan, and it works.
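Alternatively, since the error output mentions it, a .terraformignore file at the repository root can exclude the offending directory from the upload; a minimal sketch, assuming docker/mysql is the problem folder:
# .terraformignore at /Users/yokulguy/Development/arepository
docker/mysql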
Created a simple Azure DevOps release pipeline to provision a resource group. Tested the Terraform script with a remote state file locally and checked the code in to Git. This is how the code is organized:
IAC (root folder)
/bin/terraform.exe
main.tf (this has terraform configuration with remote state)
Created a release pipeline pointing to this repository as code. The pipeline gives the alias _IAC to the artifact.
In the pipeline I have PowerShell activities to log in to Azure using a service principal,
then the following line:
$(System.DefaultWorkingDirectory)/_IAC/bin/terraform init
This command executes but says there is no terraform configuration file.
2020-03-05T02:23:04.4536130Z Terraform initialized in an empty directory!
2020-03-05T02:23:04.4536786Z
2020-03-05T02:23:04.4556953Z The directory has no Terraform configuration files. You may begin working
2020-03-05T02:23:04.4559693Z with Terraform immediately by creating Terraform configuration files.
The working directory where the Azure release pipeline agent was running did not have the configuration file.
I had to use a copy operation to copy the main.tf file to $(AgentWorkingDirectory).
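A minimal sketch of that copy step as an inline PowerShell script, assuming the _IAC artifact alias and the variable names mentioned above:
# copy the configuration next to where terraform will run
Copy-Item "$(System.DefaultWorkingDirectory)/_IAC/main.tf" -Destination "$(AgentWorkingDirectory)"
# then run init from that directory
Set-Location "$(AgentWorkingDirectory)"
& "$(System.DefaultWorkingDirectory)/_IAC/bin/terraform" init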
Need some assistance regarding using files within modules from Terraform Enterprise. The Git top-level folder structure is like this:
modules
README.md
main.tf
Within the modules folder, the structure is like this:
modules
main.tf
file1.json
The main.tf within modules refers to file1.json like below:
resource "aws_iam_policy" "deny_bill_policy" {
name = "OpsPolicy"
path = "/"
policy = "${file("${path.module}/file1.json")}"
}
The same configuration runs without any issues from my local PC to deploy on AWS, but when I run it through Terraform Enterprise, which pulls the repo from Git, it throws the following error:
module.policy_roles.aws_iam_policy.deny_bill_policy: file: open file1.json: no such file or directory in: ${file("${path.module}/file1.json")}
FYI: no previous/old .terraform dir exists. It seems TFE handles module paths differently. Can someone please assist me here?
The Hashicorp Consul repository contains a Terraform module for launching a Consul cluster in AWS. The module references several files that are found in the parent directory of the module under the shared/scripts directory here https://github.com/hashicorp/consul/tree/master/terraform
However, when I reference the module in one of my .tf files and run terraform get to download the module, the required files under shared/scripts/ are not included with the downloaded module files, leading to errors like the one described here
My module section in Terraform looks like this:
module "consul" {
source = "github.com/hashicorp/consul/terraform/aws"
key_name = "example_key"
key_path = "/path/to/example_key"
region = "us-east-1"
servers = "3"
platform = "centos7"
}
Is there any way to have terraform get pull in files that live outside the module directory?
Thanks
From looking at what those files do, I'd just copy the ones you need (depending on whether you're deploying on Debian or RHEL), which will be 2-3 files, and feed them into provisioner "file":
https://www.terraform.io/docs/provisioners/file.html
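A minimal sketch of that approach, assuming you have copied one of the upstream scripts (for example install_consul.sh) into a scripts/ folder next to your configuration; the AMI, user, and key paths are placeholders:
resource "aws_instance" "consul" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
  key_name      = "example_key"

  provisioner "file" {
    source      = "${path.module}/scripts/install_consul.sh"
    destination = "/tmp/install_consul.sh"

    connection {
      type        = "ssh"
      user        = "centos"
      private_key = "${file("/path/to/example_key")}"
    }
  }
}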