Inheritance of variables not working in Terraform

For a Terraform project I have the following folder structure:
- variables.tf
- cloudsql/
  - variables.tf
  - main.tf
In the high-level variables.tf file I have defined:
variable "availability_type" {
default = {
prod = "REGIONAL"
dev = "ZONAL"
}
where prod and dev refer to production and dev workspaces.
In the cloudsql-specific variables.tf I have defined:
variable "availability_type" {
  type = "map"
}
Finally, in main.tf (under cloudsql) I use the variable:
availability_type = "${var.availability_type[terraform.workspace]}"
However, this leads to
module.cloudsql.google_sql_database_instance.master: key "default" does not exist in map var.availability_type in:
${var.availability_type[terraform.workspace]}
Why does the cloudsql module not inherit the variables?

As Matt Schuchard correctly pointed out, the workspace was still default. Running
terraform workspace select dev
beforehand resolved the issue.
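If runs in the default workspace also need to work, a minimal sketch using lookup() with a fallback (the "ZONAL" fallback here is an assumption; pick whatever default suits your setup):
# Falls back to "ZONAL" when terraform.workspace is "default"
# (or any other name missing from the map).
availability_type = "${lookup(var.availability_type, terraform.workspace, "ZONAL")}"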


Terraform: remote module not following providers block - Warning: Reference to undefined provider

I'm trying to use a (private) remote Terraform module, and trying to pass a different provider to it. The remote module has no providers defined, and to my understanding it will use the local provider instead.
I can't seem to be able to get it to use a provider alias - there are a few files at play here:
# main.tf
provider "aws" {
  region = var.aws_region
}

provider "aws" {
  alias  = "replica_region"
  region = var.replica_region
}

terraform {
  backend "s3" {
  }
}
# s3.tf
module "some-remote-module" {
  source = "git::ssh......."
  providers = {
    aws = aws.replica_region
  }
}
Whenever I plan (with Terragrunt), the region is that of the primary aws provider config. I get the following warning, too:
│ Warning: Reference to undefined provider
│
│ on s3.tf line 12, in module "some-remote-module":
│ 12: aws = aws.replica_region
│
│ There is no explicit declaration for local provider name "aws" in
│ module.some-remote-module, so Terraform is assuming you
│ mean to pass a configuration for "hashicorp/aws".
│
│ If you also control the child module, add a required_providers entry named
│ "aws" with the source address "hashicorp/aws".
╵
Am I passing the providers in incorrectly? Is this even something that Terraform is capable of? I'm using Terraform 1.3. The remote module doesn't have any provider config.
The last paragraph of this message suggests that you modify the child module to include the following declaration, so that it's explicit that when you say "aws" you mean hashicorp/aws rather than a provider in some other namespace that might coincidentally also be called "aws":
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
This is what the warning message means by a required_providers entry named "aws" with the source address "hashicorp/aws".
This allows Terraform to see for certain (rather than guessing) that the short name "aws" refers to the same provider in both the calling module and the called module.
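Once the child module declares its provider requirement, one way to double-check the wiring is Terraform's built-in providers command, which prints each module's provider requirements and which configuration is passed to it:
terraform providers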

Terraform Provider config inside child module

I'm trying to create modules that will handle Helm deployments. The structure goes like this:
root module - calls the helm-traefik module
- there's an input (cluster name) that is used to fetch data sources for the provider config inside the helm child module
child module - helm-traefik
- main.tf - calls the helm module
- variables.tf
- values.yaml
child module - helm
- providers.tf - both provider configs, for kubernetes and helm, use kubelogin for authentication
- datasources.tf
- main.tf - helm_release
- variables.tf
The issue is that terraform plan fails with an error saying the Kubernetes cluster is unreachable. I've been reading the docs on providers, and I think the reason for the errors is that I don't have the provider config for Kubernetes and Helm at the root module level. Is there a feasible solution for this use case? I want the helm module kept separate so it can be consumed regardless of which Helm chart is being deployed.
Also, if I move the provider config from the child module to the root module, that would mean creating a provider config for each cluster I want to manage (see the sketch after the code below).
In the helm child module, this is how I generate the provider config:
datasources.tf
locals {
  # The purpose of the cluster.
  purpose = split("-", var.cluster_name)[0]
  # The network environment of the cluster.
  customer_env = split("-", var.cluster_name)[2]
}
data "azurerm_kubernetes_cluster" "cluster" {
resource_group_name = "rg-${local.purpose}-${local.customer_env}-001"
name = "aks-${local.purpose}-${local.customer_env}"
}
provider.tf
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.cluster.kube_config.0.host
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "kubelogin"
args = [
"get-token",
"--login", "spn",
"--environment", "AzurePublicCloud",
"--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
"--tenant-id", data.azurerm_client_config.current.tenant_id,
"--client-id", data.azurerm_client_config.current.client_id,
"--client-secret", data.azurerm_key_vault_secret.service_principal_key.value,
]
}
}
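For comparison, the root-module alternative mentioned above (one provider config per cluster) could look roughly like the sketch below; the cluster_a alias, data source name, and module path are all hypothetical:
# Hypothetical root-module wiring: one aliased provider block per cluster.
provider "kubernetes" {
  alias                  = "cluster_a"
  host                   = data.azurerm_kubernetes_cluster.cluster_a.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster_a.kube_config.0.cluster_ca_certificate)
  # exec block with kubelogin, same as above ...
}
module "traefik_cluster_a" {
  source = "./helm-traefik"
  providers = {
    kubernetes = kubernetes.cluster_a
    # a helm provider alias would be passed the same way
  }
}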

Reusing Terraform modules without exposing any variables

Consider the following folder structure:
.
├── network-module/
│ ├── main.tf
│ └── variables.tf
├── dev.tfvars
├── prod.tfvars
├── main.tf
└── variables.tf
This is a simple Terraform configuration running under a GitLab pipeline.
network-module contains some variables for the network settings that change depending on the environment (dev, prod, etc.) we deploy.
The main module has an environment variable that can be used to set the target environment.
What I want to achieve is to hide the variables that the network module needs from the parent module, so that users only need to specify the environment name and can omit the network configuration for the target environment altogether.
Using -var-file when running plan or apply works, but to do that I need to include all the variables the submodule needs in the parent module's variable file.
Basically, I don't want all the variables exposed to the outside world.
One option that comes to mind is to run some scripts inside the pipeline and change the contents of the configuration through string manipulation, but that feels wrong.
Do I have any other options?
Sure, just set your per-environment configuration in the root module:
locals {
  network_module_args = {
    dev = {
      some_arg = "arg in dev"
    }
    prod = {
      some_arg = "arg in prod"
    }
  }
}

module "network_module" {
  source   = "./network-module"
  some_arg = local.network_module_args[var.environment].some_arg
}
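With this approach, the only input a caller supplies is the environment name. A minimal sketch of the matching declaration in the root module (the validation block is an optional extra):
variable "environment" {
  type        = string
  description = "Target environment; selects the matching entry in network_module_args."

  validation {
    condition     = contains(["dev", "prod"], var.environment)
    error_message = "The environment must be \"dev\" or \"prod\"."
  }
}
Callers then run, for example, terraform apply -var="environment=dev" and never see the network settings themselves.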

Can I configure Terraform to not use "env:" in the path for a workspace's state file path on S3?

I am using S3 for a remote state with Terraform.
When I use a Terraform workspace, the path of the state file for the environment ends up being s3://<bucket>/env:/<workspace>/<key>, where <bucket> and <key> are specified in a terraform block. For example, with the below Terraform HCL:
terraform {
  backend "s3" {
    bucket = "devops"
    key    = "tf-state/abc/xyz.tfstate"
    region = "us-east-1"
  }
}
If we're using a Terraform workspace, like so:
$ terraform workspace new myapp-dev
$ terraform workspace select myapp-dev
$ terraform init
$ terraform apply
then we'll end up with the Terraform state file in the following path: s3://devops/env:/myapp-dev/tf-state/abc/xyz.tfstate
It seems that Terraform by default uses a subdirectory named env: under which it manages state files associated with workspaces. My question is: can we add anything to the Terraform configuration that gives us some control over the env: part of this path?
When using a non-default workspace, the state path in the S3 bucket includes the value of workspace_key_prefix, like:
/<workspace_key_prefix>/<workspace_name>/<key>
The default value of workspace_key_prefix is env:. You could change your configuration to something like:
terraform {
  backend "s3" {
    bucket               = "devops"
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "tf-state"
  }
}
Then, when using the myapp-dev workspace, the state file will be s3://devops/tf-state/myapp-dev/terraform.tfstate.
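One caveat: after changing any backend setting you need to re-initialize, and Terraform can move existing state to the new path as part of that step. A minimal sketch of the workflow:
# Re-initialize after editing the backend block; Terraform prompts
# to migrate any state it finds under the old env: prefix.
terraform init -migrate-state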

Terraform - Use of environment variables in TF files

I would like to use environment variables in my TF files. How can I reference them in those files?
I use Terraform Cloud and define the variables in the environment variables section, which means I don't use the CLI to run Terraform commands (no export TF_VAR_..., and no -var or -var-file parameter).
I didn't find any answer to this in the forums or in the documentation.
Edit:
Maybe if I elaborate on what I've done it will be clearer.
I have 2 environment variables named "username" and "password".
Those variables are defined in the environment variables section in Terraform Cloud.
In my main.tf file I create a mongo cluster, which should be created with those username and password variables.
In the main variables.tf file I defined those variables as:
variable "username" {
type = string
}
variable "password" {
type = string
}
My main.tf file looks like:
module "eu-west-1-mongo-cluster" {
...
...
username = var.username
password = var.password
}
In the mongo submodule I defined them in the variables.tf file as type string, and in the submodule's mongo.tf file I reference them as var.username and var.password.
Thanks!
I don't think what you are trying to do is supported by Terraform Cloud. You are setting Environment Variables in the UI, but you need to set Terraform Variables instead.
For the Terraform Cloud backend you need to dynamically create a *.auto.tfvars file; none of the usual -var="myvar=123", TF_VAR_myvar=123, or terraform.tfvars mechanisms are currently supported from the remote backend. The error message below is produced by the CLI when running Terraform 1.0.1 with a -var value:
│ Error: Run variables are currently not supported
│
│ The "remote" backend does not support setting run variables at this time. Currently the
│ only to way to pass variables to the remote backend is by creating a '*.auto.tfvars'
│ variables file. This file will automatically be loaded by the "remote" backend when the
│ workspace is configured to use Terraform v0.10.0 or later.
│
│ Additionally you can also set variables on the workspace in the web UI:
│ https://app.terraform.io/app/<org>/<workspace>/variables
My use case is a CI/CD pipeline using the CLI with a remote Terraform Cloud backend, so I created the *.auto.tfvars file with, for example:
# Environment variables set by pipeline
TF_VAR_cloudfront_origin_path="/mypath"
# Dynamically create *.auto.tfvars from environment variables
cat >dev.auto.tfvars <<EOL
cloudfront_origin_path="$TF_VAR_cloudfront_origin_path"
EOL
# Plan the run
terraform plan
As per https://www.terraform.io/docs/cloud/workspaces/variables.html#environment-variables, Terraform will export all the provided variables.
So if you have defined an environment variable TF_VAR_name, then you should be able to use it as var.name in the Terraform code.
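For example (reusing the username variable from the question): setting TF_VAR_username in the workspace's Environment Variables section populates the matching declaration automatically.
# Populated from the TF_VAR_username environment variable.
variable "username" {
  type = string
}
# Referenced elsewhere in the configuration as var.username.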
Hope this helps.
I managed to get around this in my DevOps pipeline by copying the terraform.tfvars file from my subdirectory to the working directory as file.auto.tfvars.
For example:
cp $(System.DefaultWorkingDirectory)/_demo/env/dev/variables.tfvars $(System.DefaultWorkingDirectory)/demo/variables.auto.tfvars
