I would like to use environment variables in my TF files. How can I reference them in those files?
I use Terraform Cloud and define the variables in the Environment Variables section, which means I don't run Terraform commands from my CLI (no export TF_VAR_*, no -var or -var-file parameter).
I didn't find any answer to this in the forums or the documentation.
Edit:
Maybe if I elaborate on what I've done it will be clearer.
So I have 2 environment variables named "username" and "password".
Those variables are defined in the environment variables section in Terraform Cloud.
In my main.tf file I create a mongo cluster that should use those username and password variables.
In the main variables.tf file I defined those variables as:
variable "username" {
type = string
}
variable "password" {
type = string
}
My main.tf file looks like:
module "eu-west-1-mongo-cluster" {
...
...
username = var.username
password = var.password
}
In the mongo submodule I declared them in its variables.tf file as type string, and in the submodule's mongo.tf file I reference them as var.username and var.password.
Thanks!
I don't think what you are trying to do is supported by Terraform Cloud. You are setting Environment Variables in the UI, but you need to set Terraform Variables instead (the other variable category on the workspace's Variables page).
For the Terraform Cloud remote backend you need to dynamically create a *.auto.tfvars file; none of the usual -var="myvar=123", TF_VAR_myvar=123, or terraform.tfvars mechanisms are currently supported from the remote backend. The error message below is produced by the CLI when running Terraform 1.0.1 with a -var value:
│ Error: Run variables are currently not supported
│
│ The "remote" backend does not support setting run variables at this time. Currently the
│ only to way to pass variables to the remote backend is by creating a '*.auto.tfvars'
│ variables file. This file will automatically be loaded by the "remote" backend when the
│ workspace is configured to use Terraform v0.10.0 or later.
│
│ Additionally you can also set variables on the workspace in the web UI:
│ https://app.terraform.io/app/<org>/<workspace>/variables
My use case is a CI/CD pipeline where the CLI uses a remote Terraform Cloud backend, so I created the *.auto.tfvars with, for example:
# Environment variables set by pipeline
TF_VAR_cloudfront_origin_path="/mypath"
# Dynamically create *.auto.tfvars from environment variables
cat >dev.auto.tfvars <<EOL
cloudfront_origin_path="$TF_VAR_cloudfront_origin_path"
EOL
# Plan the run
terraform plan
As per https://www.terraform.io/docs/cloud/workspaces/variables.html#environment-variables, Terraform Cloud exports all the provided environment variables into the run environment.
So if you have defined an environment variable TF_VAR_name, then you should be able to use it as var.name in the Terraform code.
Hope this helps.
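For example, a minimal sketch of that mapping (the variable name "name" is only illustrative): with TF_VAR_name defined as an environment variable on the workspace, this declaration receives its value without any default or -var flag. Note that Terraform strips only the TF_VAR_ prefix, so the rest of the environment variable name must match the declaration exactly, including case:
variable "name" {
  # Populated from the TF_VAR_name environment variable at run time.
  type = string
}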
I managed to get around this in my DevOps pipeline by copying the terraform.tfvars file from my subdirectory to the working directory as a *.auto.tfvars file.
For example:
cp $(System.DefaultWorkingDirectory)/_demo/env/dev/variables.tfvars $(System.DefaultWorkingDirectory)/demo/variables.auto.tfvars
Consider the following folder structure:
.
├── network-module/
│ ├── main.tf
│ └── variables.tf
├── dev.tfvars
├── prod.tfvars
├── main.tf
└── variables.tf
This is a simple Terraform configuration running under a GitLab pipeline.
network-module contains some variables for the network settings that change depending on the environment (dev, prod, etc.) we deploy to.
The main module has an environment variable that can be used to set the target environment.
What I want to achieve is to hide the variables that the network module needs from the parent module, so that users only need to specify the environment name and can omit the network configuration for the target environment altogether.
Using -var-file when running plan or apply works, but to do that I need to include all the variables the submodule needs in the parent module's variable file.
Basically, I don't want all the variables exposed to the outside world.
One option that comes to mind is to run some scripts inside the pipeline and change the contents of the configuration through string manipulation, but that feels wrong.
Do I have any other options?
Sure, just set your per-environment configuration in the root module.
locals {
  network_module_args = {
    dev = {
      some_arg = "arg in dev"
    }
    prod = {
      some_arg = "arg in prod"
    }
  }
}

module "network_module" {
  source = "./network-module"

  # Look up the per-environment settings by the environment name.
  some_arg = local.network_module_args[var.environment].some_arg
}
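For this to work, the root module needs to expose the environment name as an input; a minimal sketch (the variable name "environment" is assumed from the question's description):
variable "environment" {
  description = "Name of the target environment (e.g. dev or prod)"
  type        = string
}
Users then pass only the environment name (for example -var="environment=dev" in the pipeline), and the network settings stay internal to the root module.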
The expectation is that when running terragrunt run-all apply from the root directory, a provider.tf file will be created in the subdirectories. I've verified my backend is able to talk to my Azure storage account, and it will create a terraform.tfstate file there. I expect a provider.tf file to appear under each "service#" folder. I'm extremely new to Terragrunt; this is my first exercise with it. I am not actually trying to deploy any Terraform resources, just to have the provider.tf file created in my subdirectories. TF version is 1.1.5. Terragrunt version is 0.36.1.
my folder structure
tfpractice
├───terragrunt.hcl
├───environment_vars.yaml
├───dev
│   ├───service1
│   │   └───terragrunt.hcl
│   └───service2
│       └───terragrunt.hcl
└───prod
    ├───service1
    │   └───terragrunt.hcl
    └───service2
        └───terragrunt.hcl
root terragrunt.hcl config
# Generate provider configuration for all child directories
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<EOF
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "2.94.0"
}
}
backend "azurerm" {}
}
provider "azurerm" {
features {}
}
EOF
}
# Remote backend settings for all child directories
remote_state {
  backend = "azurerm"
  config = {
    resource_group_name  = local.env_vars.resource_group_name
    storage_account_name = local.env_vars.storage_account_name
    container_name       = local.env_vars.container_name
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}
# Collect values from environment_vars.yaml and set as local variables
locals {
  env_vars = yamldecode(file("environment_vars.yaml"))
}
environment_vars.yaml config
resource_group_name: "my-tf-test"
storage_account_name: "mystorage"
container_name: "tfstate"
terragrunt.hcl config in the service# folders
# Collect values from parent environment_vars.yaml file and set as local variables
locals {
  env_vars = yamldecode(file(find_in_parent_folders("environment_vars.yaml")))
}
# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}
When I run terragrunt run-all apply, this is the output:
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) y
Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

(the init and apply output above repeats once for each of the four service folders)
It looks successful, however NO provider.tf files show up in ANY directory, not even root. It just creates a terraform.tfstate file under the service# directories.
But... if I run a terragrunt init from the root directory, it will create the provider.tf file as expected in the root directory. This does NOT work in the service# directories, although the terragrunt init there is successful.
What am I missing? This is the most basic Terragrunt use case, and the examples lead me to believe this should just work.
I got it to work, but the terragrunt run-all apply command doesn't do it at all. Instead, I have to run terragrunt apply at the root; if you don't run it at the root, all the subfolders get grouped together rather than under their dev/prod subfolder. Then I have to go into every subfolder and run it again. It's the only way I've gotten it to work.
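A sketch of the sequence described above, with paths assumed from the question's folder structure:
cd tfpractice
terragrunt apply        # generates provider.tf at the root
cd dev/service1
terragrunt apply        # repeat in each service folder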
I configure my Terraform using a GCS backend, with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
description = "Base url for the deployed cloud run service"
value = google_cloud_run_service.api.status[0].url
}
My CI server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
When I call terraform output base_api_url, it gives me the following error:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I try calling terraform refresh like it mentions in the warning and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running Terraform commands from a different path than the one I was in:
terraform -chdir="another/path" apply
Running the output command afterwards would then fail with that error, unless you cd to that path before running it:
cd "another/path"
terraform output
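Since -chdir is a global Terraform CLI option rather than something specific to apply, passing it to the output command should also work:
terraform -chdir="another/path" output base_api_url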
Terraform version: 0.12.24
This is really weird because I have used the TF_VAR_ substitution syntax before and it has worked fine.
provider.tf
# Configure the AWS Provider
provider "aws" {
version = "~> 2.0"
region = "ap-southeast-2"
access_key = var.aws_access_key_id
secret_key = var.aws_secret_access_key
}
vars.tf
variable "aws_access_key_id" {
description = "Access Key for AWS IAM User"
}
variable "aws_secret_access_key" {
description = "Secret Access Key for AWS IAM User"
}
variable "terraform_cloud_token" {
description = "Token used to log into Terraform Cloud via the CLI"
}
backend.tf for Terraform Cloud
terraform {
  backend "remote" {
    organization = "xx"

    workspaces {
      name = "xx"
    }
  }
}
Build logs
---------------
TF_VAR_aws_secret_access_key=***
TF_VAR_aws_access_key_id=***
TF_VAR_terraform_cloud_token=***
---------------
It also fails locally when I try to run this in a local Docker container.
Dockerfile
FROM hashicorp/terraform:0.12.24
COPY . /app
COPY .terraformrc $HOME
ENV TF_VAR_aws_secret_access_key 'XX'
ENV TF_VAR_aws_access_key_id 'XX'
ENV TF_VAR_terraform_cloud_token 'XX'
WORKDIR /app
ENTRYPOINT ["/app/.github/actions/terraform-plan/entrypoint.sh"]
entrypoint.sh
#!/bin/sh -l
# move terraform cloud configuration file to user root as expected
# by the backend resource
mv ./.terraformrc ~/
terraform init
terraform plan
output from docker container run
$ docker run -it tf-test
---------------
TF_VAR_aws_secret_access_key=XX
TF_VAR_aws_access_key_id=XX
TF_VAR_terraform_cloud_token=XX
---------------
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.56.0...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
To view this run in a browser, visit:
https://app.terraform.io/app/XX/XX/runs/run-XX
Waiting for the plan to start...
Terraform v0.12.24
Configuring remote state backend...
Initializing Terraform configuration...
2020/04/03 01:43:04 [DEBUG] Using modified User-Agent: Terraform/0.12.24 TFC/05d5abc3eb
Error: No value for required variable
on vars.tf line 1:
1: variable "aws_access_key_id" {
The root module input variable "aws_access_key_id" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Error: No value for required variable
on vars.tf line 5:
5: variable "aws_secret_access_key" {
The root module input variable "aws_secret_access_key" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Error: No value for required variable
on vars.tf line 9:
9: variable "terraform_cloud_token" {
The root module input variable "terraform_cloud_token" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Okay... it is confusing, because the logs generated in Terraform Cloud's VMs are streamed to your own terminal/run logs.
But this is what I found out: there are two execution options available to you when you use Terraform Cloud.
1. Use Terraform Cloud's VMs to run your terraform commands (remote execution).
2. Use your own (or your CI/CD platform's) infrastructure to run those terraform commands (local execution).
If you choose the first option (which is annoyingly the default)... you must set your environment variables within the Terraform Cloud dashboard. This is because all terraform commands for this execution type run in Terraform Cloud's VMs, and the environment variables in your local environment, for good security reasons, aren't passed through.
So if you have the remote option selected, it will work as expected once you set the variables in the dashboard.
For a terraform project I have the following folder structure:
- variables.tf
- cloudsql
  - variables.tf
  - main.tf
In the high-level variables.tf file I have defined:
variable "availability_type" {
default = {
prod = "REGIONAL"
dev = "ZONAL"
}
where prod and dev refer to production and dev workspaces.
In the cloudsql specific level variables.tf I have defined:
variable "availability_type" {
type = "map"
}
Finally, in main.tf (under cloudsql) I use the variable:
availability_type = "${var.availability_type[terraform.workspace]}"
However, this leads to
module.cloudsql.google_sql_database_instance.master: key "default" does not exist in map var.availability_type in:
${var.availability_type[terraform.workspace]}
Why does the cloudsql module not inherit the variables?
As Matt Schuchard correctly pointed out, the active workspace was default, and the map only defines prod and dev keys, so var.availability_type[terraform.workspace] looked up a nonexistent key. Running
terraform workspace select dev
beforehand resolved the issue.
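Alternatively (my own suggestion, not part of the original answer), adding a default key to the map in the high-level variables.tf makes the lookup succeed in the default workspace as well:
variable "availability_type" {
  default = {
    # Fallback for the "default" workspace, plus the named environments.
    default = "ZONAL"
    prod    = "REGIONAL"
    dev     = "ZONAL"
  }
}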