"Variables may not be used here" during terraform init - terraform

I am using the Terraform Snowflake provider. I want to use the ${terraform.workspace} variable inside the terraform block.
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.20.0"
    }
  }

  backend "s3" {
    bucket         = "data-pf-terraform-backend-${terraform.workspace}"
    key            = "backend/singlife/landing"
    region         = "ap-southeast-1"
    dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
  }
}
But I get this error. Are variables not available in this scope?
Error: Variables not allowed
on provider.tf line 9, in terraform:
9: bucket = "data-pf-terraform-backend-${terraform.workspace}"
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 12, in terraform:
12: dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
Variables may not be used here.

Set up backend.tf:
terraform {
  backend "azurerm" {}
}
Create a file backend.conf
storage_account_name = "deploymanager"
container_name       = "terraform"
key                  = "production.terraform.tfstate"
Run:
terraform init -backend-config=backend.conf
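If different environments need different values, one option (a sketch; the file name and key below are illustrative, not from the answer) is to keep one such partial-config file per environment and pick it at init time:

# staging.conf (hypothetical)
storage_account_name = "deploymanager"
container_name       = "terraform"
key                  = "staging.terraform.tfstate"

terraform init -backend-config=staging.conf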

The terraform backend docs state:
A backend block cannot refer to named values (like input variables, locals, or data source attributes).
However, the s3 backend docs show how you can partition S3 storage based on the current workspace, so each workspace gets its own independent state file. You just can't specify a distinct bucket for each workspace: you can only specify one bucket for all workspaces, and the s3 backend adds the workspace prefix to the path:
When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
And one dynamo table will suffice for all workspaces. So just use:
backend "s3" {
bucket = "data-pf-terraform-backend"
key = "terraform.tfstate"
region = "ap-southeast-1"
dynamodb_table = "data-pf-snowflake-terraform-state-lock"
}
And switch workspaces as appropriate before deployments.
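For illustration, with the single-bucket config above the workspace ends up in the object key rather than the bucket name:

terraform workspace new dev
terraform workspace select dev
terraform apply
# state for the "dev" workspace is stored at
# s3://data-pf-terraform-backend/env:/dev/terraform.tfstate
# ("env:" is the default workspace_key_prefix)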

But how is Jhonny's answer any different? You still cannot put variables in backend.conf, which was the initial question.
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.conf line 1:
│ 1: bucket = "server-${var.account_id}"
│
│ Variables may not be used here.
The only way for now is to use a wrapper script that provides env variables, unfortunately.
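As a minimal sketch of such a wrapper (the script name and naming convention are assumptions, not from the question):

#!/usr/bin/env sh
# init-backend.sh (hypothetical): injects the per-environment values the backend block cannot interpolate
WORKSPACE="${1:?usage: init-backend.sh <workspace>}"
terraform init \
  -backend-config="bucket=data-pf-terraform-backend-${WORKSPACE}" \
  -backend-config="dynamodb_table=data-pf-snowflake-terraform-state-lock-${WORKSPACE}"
terraform workspace select "${WORKSPACE}" || terraform workspace new "${WORKSPACE}"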

You could check out Terragrunt, which is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.
See here: https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
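For context, a minimal sketch of that pattern (bucket, region, and table names are placeholders): the backend config lives once in a root terragrunt.hcl, and each child module inherits it instead of repeating a backend block:

remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state"                               # placeholder
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"                                    # placeholder
  }
}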

Check Jhonny's solution first:
https://stackoverflow.com/a/69664785/132438
(keeping this one for historical reference)
This seems like a specific instance of a more common problem in Terraform: concatenating variables.
Using locals to concatenate should fix it. See https://www.terraform.io/docs/configuration/locals.html
An example from https://stackoverflow.com/a/61506549/132438:
locals {
  BUCKET_NAME = [
    "bh.${var.TENANT_NAME}.o365.attachments",
    "bh.${var.TENANT_NAME}.o365.eml"
  ]
}

resource "aws_s3_bucket" "b" {
  # element() wraps around, so index 2 on this two-item list resolves to the first entry
  bucket = element(local.BUCKET_NAME, 2)
  acl    = "private"
}

Related

terraform remote state best practice

I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and DynamoDB table).
When I then use the module, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "mymodule" {
  source      = "./module/mymodule"
  region      = "param1"
  prefix      = "param2"
  project     = "xxxx"
  username    = "ddd"
  contact     = "myemail"
  table_name  = "table-name"
  bucket_name = "uniquebucketname"
}
where I leave the remote state part commented out and let Terraform create a local state and all of the resources (including the bucket and the DynamoDB table).
After the resources are created, I re-run terraform init and migrate the state to S3.
I wonder if this is good practice, or if there is something better for maintaining the state that also provides isolation.
That is an interesting approach. I would create the S3 bucket manually, since it's a one-time creation for your state file management. Then I would add a policy to prevent deletion (see here: https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets), plus versioning and/or a backup.
Beyond this approach there are better practices, such as using tools like Terraform Cloud, which is free for up to 5 users. Then in your Terraform root module configuration you would put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name   = ""  ## for single-workspace jobs
      # prefix = ""  ## for multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
More details in this similar Q&A: Initial setup of terraform backend using terraform
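If you do keep the bucket and lock table in Terraform as in the question, here is a minimal sketch (resource names are placeholders) of the versioning and deletion protection mentioned above, using the ~> 4.0 AWS provider from the question:

resource "aws_s3_bucket" "state" {
  bucket = "uniquebucketname"   # placeholder

  lifecycle {
    prevent_destroy = true      # guard against an accidental terraform destroy
  }
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id

  versioning_configuration {
    status = "Enabled"          # keeps prior versions of the state file
  }
}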

terragrunt not accepting vars files

I am trying to use a two-repo IaC setup, with the so-called back-end in the form of Terragrunt modules and the front-end (or "live") repo instantiating those modules, filled in with variables.
The image below depicts the structure of those two repos (terragrunt being the back-end and terraform-live the live one, as the name implies).
In my terragrunt/aws-vpc/variables.tf, there is the following declaration:
variable "remote_state_bucket" {
description = "The bucket containing the terraform remote state"
}
However, when trying to perform a terragrunt apply in the live directory, I get the following:
var.remote_state_bucket
The bucket containing the terraform remote state
Enter a value:
Here is my terraform-live/environments/staging/terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket = "my-bucket-staging"
    key    = "terraform/state/var.env_name/${path_relative_to_include()}"
    region = "eu-west-1"
  }
}

# Configure root level variables that all resources can inherit
terraform {
  extra_arguments "extra_args" {
    commands = "${get_terraform_commands_that_need_vars()}"

    optional_var_files = [
      "${get_terragrunt_dir()}/${find_in_parent_folders("config.tfvars", "ignore")}",
      "${get_terragrunt_dir()}/${find_in_parent_folders("secrets.auto.tfvars", "ignore")}",
    ]
  }
}
What is more, the variable seems to be declared in one of the files that terragrunt is instructed to read variables from:
➢ cat terraform-live/environments/staging/config.tfvars
remote_state_bucket = "pkaramol-staging"
Why is terragrunt (or terraform ?) unable to read the specific variable?
➢ terragrunt --version
terragrunt version v0.19.29
➢ terraform --version
Terraform v0.12.4
Because config.tfvars is not in a parent folder :)
find_in_parent_folders looks in parent folders, but not in the current folder. And your config.tfvars is in the same folder as your terragrunt.hcl.
Try using something like:
optional_var_files = [
  "${get_terragrunt_dir()}/config.tfvars",
  "${get_terragrunt_dir()}/secrets.auto.tfvars",
]

terraform_remote_state data block syntax

I'm working on an AWS multi-account setup with Terraform. I've got a master account that creates several sub-accounts, and in the sub-accounts I'm referencing the master's remote state to retrieve output values.
The terraform plan command is failing for this configuration in a test main.tf:
terraform {
  required_version = ">= 0.12.0"

  backend "s3" {
    bucket = "bucketname"
    key    = "statekey.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region  = "us-east-1"
  version = "~> 2.7"
}

data "aws_region" "current" {}

data "terraform_remote_state" "common" {
  backend = "s3"
  config {
    bucket = "anotherbucket"
    key    = "master.tfstate"
  }
}
With the following error:
➜ test terraform plan
Error: Unsupported block type
on main.tf line 20, in data "terraform_remote_state" "common":
20: config {
Blocks of type "config" are not expected here. Did you mean to define argument
"config"? If so, use the equals sign to assign it a value.
From what I can tell from the documentation, this should be working… what am I doing wrong?
➜ test terraform -v
Terraform v0.12.2
+ provider.aws v2.14.0
It seems the related documentation wasn't updated after the upgrade to 0.12.x.
As the error suggests, add = after config:
data "terraform_remote_state" "common" {
backend = "s3"
config = {
bucket = "anotherbucket"
key = "master.tfstate"
}
}
If this fixes the problem, I recommend raising a PR to update the documentation so others can avoid the same issue.
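Once the block parses, the master account's values are read through the outputs attribute in 0.12+ (the output name below is hypothetical):

# assumes the master state defines an output named "vpc_id"
vpc_id = data.terraform_remote_state.common.outputs.vpc_id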

Terragrunt with s3 backend changes during apply

I am trying to use Terragrunt to manage AWS infrastructure. The problem I am facing is that the backend keeps changing. The simplest way to reproduce the problem is:
terragrunt init -reconfigure -backend-config="workspace_key_prefix=ujjwal"
terragrunt workspace new ujjwal
terragrunt apply
It throws the below error
Backend config has changed from map[region:us-east-1 workspace_key_prefix:ujjwal bucket:distplat-phoenix-live dynamodb_table:df04-phoenix-live encrypt:%!s(bool=true) key:vpc-main/terraform.tfstate] to map[bucket:distplat-phoenix-live key:vpc-main/terraform.tfstate region:us-east-1 encrypt:%!s(bool=true) dynamodb_table:df04-phoenix-live]
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
When I say yes to this, I can see that in S3 a folder named env: is created and the .tfstate file is placed there instead of under the workspace directory that was created.
Below is the content of the terraform.tfvars file in the root directory
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket         = "xxxxxxx"
      key            = "${path_relative_to_include()}/terraform.tfstate"
      region         = "us-east-1"
      encrypt        = true
      dynamodb_table = "yyyyyyyyy"

      s3_bucket_tags {
        owner = "Ujjwal Singh"
        name  = "Terraform state storage"
      }

      dynamodb_table_tags {
        owner = "Ujjwal"
        name  = "Terraform lock for vpc"
      }
    }
  }
}
Any help is much appreciated.
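One observation, offered as an assumption rather than a verified fix: the error shows workspace_key_prefix being dropped when Terragrunt regenerates the backend settings from the remote_state block, so a prefix passed only via -backend-config does not survive the next run. Declaring it in the config block itself should make the two match:

config {
  bucket               = "xxxxxxx"
  key                  = "${path_relative_to_include()}/terraform.tfstate"
  region               = "us-east-1"
  encrypt              = true
  dynamodb_table       = "yyyyyyyyy"
  workspace_key_prefix = "ujjwal"   # assumption: declare the prefix where terragrunt can see it
}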

How to give a .tf file as input in Terraform Apply command?

I'm a beginner in Terraform.
I have a directory which contains 2 .tf files.
Now I want to run terraform apply on a selected .tf file and ignore the other one.
Can I do that? If yes, how? If not, why not, and what is the best practice?
You can't selectively apply one file and then the other. Two ways of (maybe) achieving what you're going for:
Use the -target flag to target the resource(s) in one file and then the other (see the example below).
Put each file (or, more broadly, each group of resources, which might span multiple files) in separate "modules" (folders). You can then apply them separately.
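For instance (the resource and module addresses are hypothetical):

terraform apply -target=aws_s3_bucket.b
terraform apply -target=module.frontend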
You can use the terraform -target flag. Or
you can have multiple Terraform modules in separate directories and then run terraform apply there.
As an example, assume you have 3 separate .tf files but need to run more than one of them at the same time. If you also need to run them more often, it's better to have a Terraform module.
terraform
├── frontend
│   └── main.tf
├── backend-1
│   └── main.tf
├── backend-2
│   └── main.tf
└── modules-1
    └── module.tf
Inside module.tf you can define which configurations you want to apply:
module "frontend" {
source = "terraform/frontend"
}
module "backend-1" {
source = "terraform/backend-1"
}
Then issue terraform apply while staying in the module directory, and it will automatically pick up the modules at those paths and apply them.
Putting each Terraform config file into a separate directory did the job for me.
So here is my structure:
├── aws
│   └── aws_terraform.tf
├── trash
│   └── main.tf
All you have to do:
enter each folder
run terraform init && terraform plan && terraform apply
enter 'yes' to confirm terraform apply
PS: the '-target' flag didn't help me out.
Either use the -target option to specify the module to run, using the command below:
terraform apply -target=module.<module_name>
Or another workaround is to rename the other Terraform files with a *.tf.disable extension so Terraform skips loading them; currently Terraform only loads *.tf files.
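For example (the file name is illustrative):

mv other.tf other.tf.disable    # Terraform now ignores this file
terraform apply
mv other.tf.disable other.tf    # restore it afterwards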
If you can't have Terraform files in different folders as the other answers suggest, you can try using my script (GitHub repo for script), which runs through a specific Terraform file and outputs a command adding "-target=" for all the module names.
This answer covers the data block, as others have already explained the resource block.
You can also target a data block if you're performing a read operation.
Let's say you have two files: create.tf and read.tf.
Assuming create.tf is already applied:
resource "hashicups_order" "edu" {
items {
coffee {
id = 3
}
quantity = 3
}
items {
coffee {
id = 2
}
quantity = 1
}
}
output "edu_order" {
value = hashicups_order.edu
}
And you only want to apply read.tf:
data "hashicups_ingredients" "first_coffee" {
coffee_id = hashicups_order.edu.items[0].coffee[0].id
}
output "first_coffee_ingredients" {
value = data.hashicups_ingredients.first_coffee
}
You can create a plan targeting only the read data block:
terraform plan -target=data.hashicups_ingredients.first_coffee
And similarly, apply the read operation using Terraform:
terraform apply -target=data.hashicups_ingredients.first_coffee -auto-approve
No, unfortunately Terraform doesn't have a feature to apply a selected .tf file; Terraform applies all .tf files in the same directory.
But you can apply selected code by commenting and uncommenting. For example, say you have two .tf files, "1st.tf" and "2nd.tf", in the same directory that create resources on GCP (Google Cloud Platform):
Then, "1st.tf" has this code below:
provider "google" {
credentials = file("myCredentials.json")
project = "myproject-113738"
region = "asia-northeast1"
}
resource "google_project_service" "project" {
service = "iam.googleapis.com"
disable_dependent_services = true
}
And "2nd.tf" has this code below:
resource "google_service_account" "service_account_1" {
display_name = "Service Account 1"
account_id = "service-account-1"
}
resource "google_service_account" "service_account_2" {
display_name = "Service Account 2"
account_id = "service-account-2"
}
Now, first, you want to apply only the code in "1st.tf", so you need to comment out the code in "2nd.tf":
1st.tf:
provider "google" {
credentials = file("myCredentials.json")
project = "myproject-113738"
region = "asia-northeast1"
}
resource "google_project_service" "project" {
service = "iam.googleapis.com"
disable_dependent_services = true
}
2nd.tf (Comment Out):
# resource "google_service_account" "service_account_1" {
# display_name = "Service Account 1"
# account_id = "service-account-1"
# }
# resource "google_service_account" "service_account_2" {
# display_name = "Service Account 2"
# account_id = "service-account-2"
# }
Then, you apply:
terraform apply -auto-approve
Next, you additionally want to apply the code in "2nd.tf", so you uncomment it:
1st.tf:
provider "google" {
credentials = file("myCredentials.json")
project = "myproject-113738"
region = "asia-northeast1"
}
resource "google_project_service" "project" {
service = "iam.googleapis.com"
disable_dependent_services = true
}
2nd.tf (Uncomment):
resource "google_service_account" "service_account_1" {
display_name = "Service Account 1"
account_id = "service-account-1"
}
resource "google_service_account" "service_account_2" {
display_name = "Service Account 2"
account_id = "service-account-2"
}
Then, you apply:
terraform apply -auto-approve
This way, you can apply selected code by commenting and uncommenting.
terraform apply -target nginx-docker.tf
