Reusing Terraform modules without exposing any variables

Consider the following folder structure:
.
├── network-module/
│   ├── main.tf
│   └── variables.tf
├── dev.tfvars
├── prod.tfvars
├── main.tf
└── variables.tf
This is a simple Terraform configuration running under a GitLab pipeline.
network-module contains some variables for the network settings that change depending on the environment (dev, prod, etc.) we deploy to.
The main module has an environment variable that can be used to set the target environment.
What I want to achieve is to hide the variables that the network module needs from the parent module, so that users only need to specify the environment name and can omit the network configuration for the target environment altogether.
Using -var-file when running plan or apply works, but to do that I need to include all the variables the submodule needs in the parent module's variable file.
Basically, I don't want all the variables exposed to the outside world.
One option that comes to mind is to run some scripts inside the pipeline and change the contents of the configuration through string manipulation, but that feels wrong.
Do I have any other options?

Sure, just set your per-environment configuration in a locals block in the root module:
locals {
  # Per-environment settings for the network module, kept internal
  # to the root module rather than exposed as input variables.
  network_module_args = {
    dev = {
      some_arg = "arg in dev"
    }
    prod = {
      some_arg = "arg in prod"
    }
  }
}

module "network_module" {
  source   = "./network-module"
  some_arg = local.network_module_args[var.environment].some_arg
}
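With this pattern the root module's variables.tf only needs to expose the environment name. A minimal sketch, assuming the variable is called environment and Terraform 0.13+ for the validation block:
variable "environment" {
  type        = string
  description = "Target environment; selects the internal network settings."

  validation {
    condition     = contains(["dev", "prod"], var.environment)
    error_message = "environment must be one of: dev, prod."
  }
}
Users then run terraform plan -var="environment=dev" (or pick the matching dev.tfvars/prod.tfvars file) and never see the network-specific variables.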

Related

Terraform child module does not inherit provider from root module

My problem
I am having trouble defining the provider for my module.
Terraform fails to find the provider's plugin when I run terraform init and it shows the wrong provider for my module when I run terraform providers.
Setup
I am using Terraform version 1.3.7 on Debian 11.
Here's an example of what I am trying to do.
I have a main.tf that holds my main configuration and the module calls. In this example I use a single module that creates a Docker container.
.
├── main.tf
└── modules/
    └── container_module/
        └── main.tf
In the root module project/main.tf file, I define the provider and call the module:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.1"
    }
  }
}

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

module "container" {
  source = "./modules/container_module"
}
In modules/container_module/main.tf, I create the Docker image and container resources:
resource "docker_image" "debian" {
name = "debian:latest"
}
resource "docker_container" "foo" {
image = docker_image.debian.image_id
name = "foo"
}
What I expect to happen
When I run terraform init, it should download the provider's plugin from kreuzwerker/docker.
What actually happens
Instead, terraform downloads the plugin from kreuzwerker/docker once, then attempts to download it again from hashicorp/docker.
Here's the command's output:
terraform init
Initializing modules...
- container in modules/container_module
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/docker...
- Finding kreuzwerker/docker versions matching "3.0.1"...
- Installing kreuzwerker/docker v3.0.1...
- Installed kreuzwerker/docker v3.0.1 (self-signed, key ID BD080C4571C6104C)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/docker: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/docker
│
│ Did you intend to use kreuzwerker/docker? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/docker, run the following command:
│ terraform providers
╵
When I run terraform providers I get two different sources depending on the file:
terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/kreuzwerker/docker] 3.0.1
└── module.container
└── provider[registry.terraform.io/hashicorp/docker]
According to the documentation, child modules should inherit the provider from their parent:
Default Behavior: Inherit Default Providers:
If the child module does not declare any configuration aliases, the providers argument is optional. If you omit it, a child module inherits all of the default provider configurations from its parent module. (Default provider configurations are ones that don't use the alias argument.)
I have already checked these:
Do terraform modules need required_providers?
This answer confirms the provider inheritance.
Terraform provider's resources not available in other tf files:
This question didn't help.
Terraform, providers miss inherits on module
The answer to this similar question says that I should add required_providers in the child module, but it is for an older version and contradicts what I saw elsewhere.
I have the same issue when I create a providers.tf file in the root directory.
My question
How should I declare my provider so that the child module can inherit the provider from the root module?
kreuzwerker/docker is not a HashiCorp provider, so Terraform cannot resolve it under the default hashicorp/ namespace. Thus, as explained here, you have to explicitly declare it in required_providers in each module that uses it: provider source addresses are not inherited by child modules, only default provider configurations are.
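A minimal sketch of the declaration to add to modules/container_module/main.tf; the version constraint can stay in the root module:
# modules/container_module/main.tf
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
With this in place, terraform providers reports registry.terraform.io/kreuzwerker/docker for both the root module and module.container, and terraform init stops looking for hashicorp/docker.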

How to prevent resource creation in terraform when referring to the output of one project with common resources in another project

In one solution I have 2 projects. The common-infra project creates the ECS cluster and common ECS services, like nginx, used by all other services. The ecs-service1 project contains the resource definitions for creating ECS services. I reference resource ARNs created in the common-infra project in my ecs-service1 project.
I first go to common-infra and run terraform plan and apply. Now the cluster and nginx service are up and running. Next I go to ecs-service1 and run terraform plan. At this point it recognizes that I have linked to the common-infra module and shows that it will create the cluster and common services like nginx again.
Is there a way to arrange/reference the projects so that when I run terraform plan in ecs-service1 it knows that common-infra is already built, knows its state, and creates only the resources in ecs-service1, pulling in only the ARN references created in common-infra?
.
├── ecs-service1
│   ├── main.tf
│   ├── task-def
│   │   ├── adt-api-staging2-task-definition.json
│   │   └── adt-frontend-staging2-task-definition.json
│   ├── terraform.tfstate
│   ├── terraform.tfstate.backup
│   └── variables.tf
├── common-infra
│   ├── main.tf
│   ├── task-def
│   │   └── my-nginx-staging2-task-definition.json
│   ├── terraform.tfstate
│   ├── user-data.sh
│   └── variables.tf
└── script
    └── get-taskdefinitions.sh
common-infra main.tf
output "splat_lb_listener_http_80_arn"{
value = aws_lb_listener.http_80.arn
}
output "splat_lb_listener_http_8080_arn"{
value = aws_lb_listener.http_8080.arn
}
output "splat_ecs_cluster_arn" {
value = aws_ecs_cluster.ecs_cluster.arn
}
ecs-service1 main.tf
module "splat_common" {
source = "../common-infa"
}
resource "aws_ecs_service" "frontend_webapp_service" {
name = var.frontend_services["service_name"]
cluster = module.splat_common.splat_ecs_cluster_arn
...
}
There are a few solutions, but first I'd like to say that your ecs-service should be calling common-infra as a module only if you want all of the resource creation handled in one place, not split across two state files as you describe.
Another solution would be to use terraform import to pull the existing resources into your other state. This is less than ideal, because you would then have the same infrastructure managed by 2 state files.
If you are including common-infra only because it provides some outputs, you should look into data source lookups (https://www.terraform.io/docs/language/data-sources/index.html). You can even reference the outputs of another project's Terraform state (https://www.terraform.io/docs/language/state/remote-state-data.html) (although I've never actually tried this, it can be done).
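For example, here is a sketch of the remote-state approach, assuming both projects keep local state files as in the tree above:
# ecs-service1/main.tf: read common-infra's outputs from its state
# file instead of instantiating it as a module.
data "terraform_remote_state" "common_infra" {
  backend = "local"

  config = {
    path = "../common-infra/terraform.tfstate"
  }
}

resource "aws_ecs_service" "frontend_webapp_service" {
  name    = var.frontend_services["service_name"]
  cluster = data.terraform_remote_state.common_infra.outputs.splat_ecs_cluster_arn
  # ... remaining service arguments unchanged ...
}
Because this only reads outputs from common-infra's state, terraform plan in ecs-service1 no longer proposes to create the cluster or the nginx service again.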

Terraform - Use of environment variables in TF files

I would like to use environment variables in my TF files. How can I reference them in those files?
I use Terraform Cloud and define the variables in the environment variables section, which means I don't use my CLI to run terraform commands (no export TF_VAR, no -var or -var-file parameter).
I didn't find any answer to this in the forums or the documentation.
Edit:
Maybe if I elaborate on what I've done it will be clearer.
I have 2 environment variables named "username" and "password".
Those variables are defined in the environment variables section in Terraform Cloud.
In my main.tf file I create a mongo cluster which should be created with those username and password variables.
In the main variables.tf file I defined those variables as:
variable "username" {
type = string
}
variable "password" {
type = string
}
My main.tf file looks like:
module "eu-west-1-mongo-cluster" {
...
...
username = var.username
password = var.password
}
In the mongo submodule's variables.tf file I defined them as type string, and in the submodule's mongo.tf file I reference them as var.username and var.password.
Thanks!
I don't think what you are trying to do is supported by Terraform Cloud. You are setting Environment Variables in the UI, but you need to set Terraform Variables instead.
For the Terraform Cloud backend you need to dynamically create a *.auto.tfvars file; none of the usual -var="myvar=123", TF_VAR_myvar=123, or terraform.tfvars approaches are currently supported from the remote backend. The error message below is produced by the CLI when running Terraform 1.0.1 with a -var value:
│ Error: Run variables are currently not supported
│
│ The "remote" backend does not support setting run variables at this time. Currently the
│ only to way to pass variables to the remote backend is by creating a '*.auto.tfvars'
│ variables file. This file will automatically be loaded by the "remote" backend when the
│ workspace is configured to use Terraform v0.10.0 or later.
│
│ Additionally you can also set variables on the workspace in the web UI:
│ https://app.terraform.io/app/<org>/<workspace>/variables
My use case is in a CI/CD pipeline with the CLI using a remote Terraform Cloud backend, so I created the *.auto.tfvars with, for example:
# Environment variables set by pipeline
TF_VAR_cloudfront_origin_path="/mypath"
# Dynamically create *.auto.tfvars from environment variables
cat >dev.auto.tfvars <<EOL
cloudfront_origin_path="$TF_VAR_cloudfront_origin_path"
EOL
# Plan the run
terraform plan
As per https://www.terraform.io/docs/cloud/workspaces/variables.html#environment-variables, Terraform will export all the provided variables.
So if you have defined an environment variable TF_VAR_name, then you should be able to use it as var.name in the Terraform code.
Hope this helps.
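For that route to work, the variable still has to be declared in the configuration. A minimal sketch, using name as a placeholder:
variable "name" {
  type = string
}
With the environment variable TF_VAR_name=demo set on the workspace, var.name evaluates to "demo" during the run.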
I managed to get around this in my DevOps pipeline by copying the terraform.tfvars file from my subdirectory to the working directory as file.auto.tfvars.
For example:
cp $(System.DefaultWorkingDirectory)/_demo/env/dev/variables.tfvars $(System.DefaultWorkingDirectory)/demo/variables.auto.tfvars

Inheritance of variables not working in terraform

For a terraform project I have the following folder structure:
- variables.tf
- cloudsql
  - variables.tf
  - main.tf
In the high-level variables.tf file I have defined:
variable "availability_type" {
default = {
prod = "REGIONAL"
dev = "ZONAL"
}
where prod and dev refer to production and dev workspaces.
In the cloudsql specific level variables.tf I have defined:
variable "availability_type" {
type = "map"
}
Finally in main.tf (under cloudsql) I use the variable
availability_type = "${var.availability_type[terraform.workspace]}"
However, this leads to
module.cloudsql.google_sql_database_instance.master: key "default" does not exist in map var.availability_type in:
${var.availability_type[terraform.workspace]}
Why does the cloudsql not inherit the variables?
As Matt Schuchard correctly pointed out, the workspace was set to default. Running
terraform workspace select dev
beforehand resolved the issue.
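For completeness, child modules never inherit variables implicitly; the root module has to pass them in the module block. The root configuration here presumably contains a call like this sketch (module label taken from the error output):
module "cloudsql" {
  source            = "./cloudsql"
  availability_type = "${var.availability_type}"
}
With the dev workspace selected, terraform.workspace evaluates to "dev" inside the module and the map lookup succeeds.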

terraform init not working when specifying modules

I am new to terraform and trying to fix a small issue which I am facing when testing modules.
Below is the folder structure I have on my local computer.
I have the below code at the storage folder level:
#-------storage/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my-first-terraform-bucket" {
  bucket        = "first-terraform-bucket"
  acl           = "private"
  force_destroy = true
}
And below is the snippet from the main_code level referencing the storage module:
#-------main_code/main.tf
module "storage" {
  source = "../storage"
}
When I issue terraform init / plan / apply from the storage folder it works absolutely fine and terraform creates the s3 bucket.
But when I try the same from the main_code folder I get the below error:
main_code#DFW11-8041WL3: terraform init
Initializing modules...
- module.storage
Error downloading modules: Error loading modules: module storage: No Terraform configuration files found in directory: .terraform/modules/0d1a7f4efdea90caaf99886fa2f65e95
I have read many issue threads on Stack Overflow and other GitHub issue forums but they did not help resolve this. Not sure what I am missing!
Just update the existing modules by running terraform get --update. If that does not work, delete the .terraform folder.
I agree with the comments from @rclement.
There are several ways to troubleshoot terraform issues:
Clean the .terraform folder and rerun terraform init.
This is always the first choice, but it takes time: the next terraform init installs all providers and modules again.
If you don't want to clean .terraform, to save deployment time you can run terraform get --update=true.
The most common case is that you changed something in the modules and they need to be refreshed.
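For reference, the two recovery paths as concrete commands, run from the main_code folder:
# Option 1: refresh the cached module sources in place
terraform get --update=true

# Option 2: start clean; this removes cached modules and providers
rm -rf .terraform
terraform init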
I had a similar issue, but the problem for me was that the module I had created was looking for a providers.tf, so I had to add one for the module as well, and it worked.
├── main.tf
├── modules
│   └── droplets
│       ├── main.tf
│       ├── providers.tf
│       └── variables.tf
└── variables.tf
So my providers.tf was previously present only at the root location, where the module could not use it; that was the issue for me.
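A minimal sketch of such a providers.tf inside the module; the DigitalOcean provider is an assumption based on the droplets module name:
# modules/droplets/providers.tf (provider source is an assumption)
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}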
