terragrunt not accepting vars files - terraform

I am trying to use a two-repo IaC setup: the so-called back-end consists of terragrunt modules, and the front-end (or "live") repo instantiates those modules, filling them in with variables.
The image below depicts the structure of the two repos (terragrunt being the back-end and terraform-live the live one, as the name implies).
In my terragrunt/aws-vpc/variables.tf, there is the following declaration:
variable "remote_state_bucket" {
  description = "The bucket containing the terraform remote state"
}
However, when trying to perform a terragrunt apply in the live directory, I get the following prompt:
var.remote_state_bucket
The bucket containing the terraform remote state
Enter a value:
Here is my terraform-live/environments/staging/terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket = "my-bucket-staging"
    key    = "terraform/state/var.env_name/${path_relative_to_include()}"
    region = "eu-west-1"
  }
}
# Configure root level variables that all resources can inherit
terraform {
  extra_arguments "extra_args" {
    commands = "${get_terraform_commands_that_need_vars()}"

    optional_var_files = [
      "${get_terragrunt_dir()}/${find_in_parent_folders("config.tfvars", "ignore")}",
      "${get_terragrunt_dir()}/${find_in_parent_folders("secrets.auto.tfvars", "ignore")}",
    ]
  }
}
What is more, the variable seems to be declared in one of the files that terragrunt is instructed to read variables from:
➢ cat terraform-live/environments/staging/config.tfvars
remote_state_bucket = "pkaramol-staging"
Why is terragrunt (or terraform ?) unable to read the specific variable?
➢ terragrunt --version
terragrunt version v0.19.29
➢ terraform --version
Terraform v0.12.4

Because config.tfvars is not in a parent folder :)
find_in_parent_folders looks in parent folders, but not in the current folder. And your config.tfvars is in the same folder as your terragrunt.hcl.
Try using something like:
optional_var_files = [
  "${get_terragrunt_dir()}/config.tfvars",
  "${get_terragrunt_dir()}/secrets.auto.tfvars",
]
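Putting the fix in context, the whole terraform block in terragrunt.hcl would then read something like this (a sketch assembled from the snippets above):

```hcl
terraform {
  extra_arguments "extra_args" {
    commands = get_terraform_commands_that_need_vars()

    # Point directly at the files next to this terragrunt.hcl;
    # find_in_parent_folders() is only needed for files in ancestor directories.
    optional_var_files = [
      "${get_terragrunt_dir()}/config.tfvars",
      "${get_terragrunt_dir()}/secrets.auto.tfvars",
    ]
  }
}
```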

Related

"Variables may not be used here" during terraform init

I am using the Terraform Snowflake provider. I want to use the ${terraform.workspace} variable inside the terraform block.
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.20.0"
    }
  }

  backend "s3" {
    bucket         = "data-pf-terraform-backend-${terraform.workspace}"
    key            = "backend/singlife/landing"
    region         = "ap-southeast-1"
    dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
  }
}
But I got this error. Variables are not available in this scope?
Error: Variables not allowed
on provider.tf line 9, in terraform:
9: bucket = "data-pf-terraform-backend-${terraform.workspace}"
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 12, in terraform:
12: dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
Variables may not be used here.
Set backend.tf
terraform {
  backend "azurerm" {}
}
Create a file backend.conf
storage_account_name = "deploymanager"
container_name = "terraform"
key = "production.terraform.tfstate"
Run:
terraform init -backend-config=backend.conf
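Note that -backend-config also accepts individual key=value pairs, so per-environment values (which cannot be variables) can still be injected at init time. A sketch reusing the names from the backend.conf above:

```shell
terraform init \
  -backend-config="storage_account_name=deploymanager" \
  -backend-config="container_name=terraform" \
  -backend-config="key=production.terraform.tfstate"
```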
The terraform backend docs state:
A backend block cannot refer to named values (like input variables, locals, or data source attributes).
However, the s3 backend docs show you how you can partition some s3 storage based on the current workspace, so each workspace gets its own independent state file. You just can't specify a distinct bucket for each workspace. You can only specify one bucket for all workspaces, but the s3 backend will add the workspace prefix to the path:
When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
And one dynamo table will suffice for all workspaces. So just use:
backend "s3" {
  bucket         = "data-pf-terraform-backend"
  key            = "terraform.tfstate"
  region         = "ap-southeast-1"
  dynamodb_table = "data-pf-snowflake-terraform-state-lock"
}
And switch workspaces as appropriate before deployments.
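With that configuration, each non-default workspace's state lands under the default env: prefix in the bucket (e.g. env:/staging/terraform.tfstate for a workspace named staging):

```shell
terraform workspace new staging      # create and switch to the "staging" workspace
terraform apply                      # state is written under env:/staging/
terraform workspace select default   # switch back when done
```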
But how is Jhonny's answer any different? You still cannot put variables in backend.conf, which was the initial question.
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.conf line 1:
│ 1: bucket = "server-${var.account_id}"
│
│ Variables may not be used here.
The only way for now is to use a wrapper script that provides env variables, unfortunately.
You could check out terragrunt, which is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.
See here: https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
Check Jhonny's solution first:
https://stackoverflow.com/a/69664785/132438
(keeping this one for historical reference)
Seems like a specific instance of a more common problem in Terraform: Concatenating variables.
Using locals to concatenate should fix it. See https://www.terraform.io/docs/configuration/locals.html
An example from https://stackoverflow.com/a/61506549/132438:
locals {
  BUCKET_NAME = [
    "bh.${var.TENANT_NAME}.o365.attachments",
    "bh.${var.TENANT_NAME}.o365.eml"
  ]
}

resource "aws_s3_bucket" "b" {
  bucket = "${element(local.BUCKET_NAME, 2)}"
  acl    = "private"
}

Unable to read variables from Terraform variable file

Here is my setup,
Terraform version - Terraform v0.12.17
OS - OSX 10.15.1
Use Case - define a provider file and access the variables defined in the vars file
Files
main.tf - where the code is
provider "aws" {
}

variable "AWS_REGION" {
  type = string
}

variable "AMIS" {
  type = map(string)
  default = {
    us-west-1 = "my ami"
  }
}

resource "aws_instance" "awsInstall" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"
}
awsVars.tfvars - where the region is defined
AWS_REGION="eu-region-1"
Execution
$ terraform console
var.AWS_REGION
Error: Result depends on values that cannot be determined until after "terraform apply".
What mistake have I made? I don't see any syntax errors, but I have issues accessing the variables. Any pointers would be helpful.
Thanks
Terraform does not automatically read a .tfvars file unless it is named terraform.tfvars (or terraform.tfvars.json) or its filename ends with .auto.tfvars. Because of that, when you ran terraform console with no arguments, Terraform did not know a value for the variable AWS_REGION.
To keep your existing filename, you can pass this variables file explicitly on the command line like this:
terraform console -var-file="awsVars.tfvars"
Alternatively, you could rename the file to awsVars.auto.tfvars and then Terraform will read it by default as long as it's in the current working directory when you run Terraform.
There's more information on how you can set values for root module input variables in the Terraform documentation section Assigning Values to Root Module Variables.
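A third option is an environment variable of the form TF_VAR_<name>, which Terraform also reads when assigning root module variables:

```shell
export TF_VAR_AWS_REGION="eu-region-1"
terraform console
```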
Note also that the usual naming convention for input variables and other Terraform-specific objects is to keep the names in lowercase and separate words with underscores. For example, it would be more conventional to name your variables aws_region and amis.
Furthermore, if your goal is to find an AMI for the current region (the one chosen by the AWS_DEFAULT_REGION environment variable, or in the provider configuration), you could use the aws_region data source to allow Terraform to determine that automatically, so you don't have to set it as a variable at all:
variable "amis" {
  type = map(string)
  default = {
    us-west-1 = "my ami"
  }
}

data "aws_region" "current" {}

resource "aws_instance" "awsInstall" {
  ami           = var.amis[data.aws_region.current.name]
  instance_type = "t2.micro"
}

Dynamic provisioners from variables in module

I've just learned some basics about Terraform and created a simple reusable Terraform module with an AWS EC2 instance and some additional resources (security groups, elastic IP, SSH key pairs, etc.); it's available at https://github.com/g4s8/docker-worker
Now I want to copy local files to the EC2 instance on create (I guess using the file provisioner). Is it possible to define a files variable like this:
module "worker" {
  # ...
  files = [
    {
      src: "./file1",
      dst: "/home/ec2-user/file1"
    },
    {
      src: "./file2",
      dst: "/home/ec2-user/file2"
    }
  ]
}
and create dynamic provisioners based on these variables:
resource "aws_instance" "worker" {
  # pseudocode:
  # for (file : var.files) {
  provisioner "file" {
    source      = file.source
    destination = file.destination
  }
  # }
}
So my question: is it possible to dynamically generate provisioners in a module, and if yes, how can I iterate over all items of files to create them? I didn't find any reference to this in the input variable documentation.
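This question is left unanswered in the thread, but for reference: provisioner blocks cannot be generated with dynamic blocks. A common workaround (a sketch; the variable shape and resource names are assumptions, and for_each requires Terraform 0.12.6+) is to create one null_resource per file:

```hcl
variable "files" {
  type = list(object({
    src = string
    dst = string
  }))
  default = []
}

resource "null_resource" "copy_file" {
  # One resource instance per file, keyed by source path.
  for_each = { for f in var.files : f.src => f }

  connection {
    type = "ssh"
    user = "ec2-user"
    host = aws_instance.worker.public_ip
  }

  provisioner "file" {
    source      = each.value.src
    destination = each.value.dst
  }
}
```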

Terragrunt with s3 backend changes during apply

I am trying to use terragrunt to manage AWS infrastructure. The problem I am facing is that the backend configuration keeps changing. The simplest way to reproduce the problem is:
terragrunt init -reconfigure -backend-config="workspace_key_prefix=ujjwal"
terragrunt workspace new ujjwal
terragrunt apply
It throws the below error
Backend config has changed from map[region:us-east-1 workspace_key_prefix:ujjwal bucket:distplat-phoenix-live dynamodb_table:df04-phoenix-live encrypt:%!s(bool=true) key:vpc-main/terraform.tfstate] to map[bucket:distplat-phoenix-live key:vpc-main/terraform.tfstate region:us-east-1 encrypt:%!s(bool=true) dynamodb_table:df04-phoenix-live]
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
When I say yes to this, I can see a folder named env: created in S3, and the .tfstate file is present there instead of in the workspace directory I created.
Below is the content of the terraform.tfvars file in the root directory
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket         = "xxxxxxx"
      key            = "${path_relative_to_include()}/terraform.tfstate"
      region         = "us-east-1"
      encrypt        = true
      dynamodb_table = "yyyyyyyyy"
      s3_bucket_tags {
        owner = "Ujjwal Singh"
        name  = "Terraform state storage"
      }
      dynamodb_table_tags {
        owner = "Ujjwal"
        name  = "Terraform lock for vpc"
      }
    }
  }
}
Any help is much appreciated.
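No answer appears in the thread, but one likely cause worth noting: the workspace_key_prefix passed at init time is not part of the terragrunt remote_state config, so terragrunt regenerates the backend configuration without it and Terraform reports a changed backend. A sketch of keeping the prefix in the checked-in config instead (bucket and table names elided as in the question):

```hcl
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket               = "xxxxxxx"
      key                  = "${path_relative_to_include()}/terraform.tfstate"
      region               = "us-east-1"
      encrypt              = true
      dynamodb_table       = "yyyyyyyyy"
      workspace_key_prefix = "ujjwal"  # kept in config, not passed via -backend-config
    }
  }
}
```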

How to give a .tf file as input in Terraform Apply command?

I'm a beginner in Terraform.
I have a directory which contains 2 .tf files.
Now I want to run terraform apply on a selected .tf file and ignore the other one.
Can I do that? If yes, how? If no, why & what is the best practice?
You can't selectively apply one file and then the other. Two ways of (maybe) achieving what you're going for:
Use the -target flag to target resource(s) in one file and then the other.
Put each file (or more broadly, group of resources, which might be multiple files) in separate "modules" (folders). You can then apply them separately.
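For the first option, -target takes resource or module addresses, not file names (the addresses below are hypothetical):

```shell
terraform apply -target=aws_instance.web -target=module.network
```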
You can use the terraform -target flag, or you can keep multiple Terraform modules in separate directories and run terraform apply there.
As an example, assume you have three .tf files in separate directories, but you need to run more than one of them at the same time. If you also need to run them together often, it is better to wrap them in a module:
terraform
├── frontend
│   └── main.tf
├── backend-1
│   └── main.tf
├── backend-2
│   └── main.tf
└── modules-1
    └── module.tf
Inside the module.tf you can define which files you need to apply.
module "frontend" {
  source = "terraform/frontend"
}

module "backend-1" {
  source = "terraform/backend-1"
}
Then run terraform apply from the module directory, and it will load the modules from those paths and apply them.
Putting each Terraform config file into a separate directory did the job correctly.
So here, is my structure
├── aws
│   └── aws_terraform.tf
├── trash
│   └── main.tf
All you have to do:
enter each folder
terraform init && terraform plan && terraform apply
enter 'yes' to confirm terraform apply
PS: the '-target' flag didn't help me out.
Either use the -target option to specify the module to run, with the command below:
terraform apply -target=module.<module_name>
Or, as another workaround, rename the other Terraform files with a *.tf.disable extension so Terraform skips them; Terraform currently only loads files ending in *.tf.
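The rename trick in shell form (the file name is hypothetical):

```shell
mv network.tf network.tf.disable   # Terraform now ignores this file
terraform init && terraform apply
mv network.tf.disable network.tf   # restore it afterwards
```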
If you can't keep your Terraform files in different folders as the other answers suggest, you can try my script (GitHub repo for script), which runs through a specific Terraform file and adds "-target=" for all the module names it finds.
This answer is a filler for the data block case, since others have already explained resource blocks: you can also target a data block if you're performing a read operation.
Lets say you have two files - create.tf and read.tf
Assuming create.tf is already applied:
resource "hashicups_order" "edu" {
  items {
    coffee {
      id = 3
    }
    quantity = 3
  }
  items {
    coffee {
      id = 2
    }
    quantity = 1
  }
}

output "edu_order" {
  value = hashicups_order.edu
}
And you only want to apply read.tf:
data "hashicups_ingredients" "first_coffee" {
  coffee_id = hashicups_order.edu.items[0].coffee[0].id
}

output "first_coffee_ingredients" {
  value = data.hashicups_ingredients.first_coffee
}
You can create a plan file targeting the read only data block:
terraform plan -target=data.hashicups_ingredients.first_coffee
And similarly, apply the read operation using Terraform:
terraform apply -target=data.hashicups_ingredients.first_coffee -auto-approve
No, unfortunately Terraform doesn't have a feature to apply a selected .tf file; it applies all .tf files in the same directory.
But you can apply selected code by commenting and uncommenting it. For example, say you have two .tf files, "1st.tf" and "2nd.tf", in the same directory to create resources on GCP (Google Cloud Platform):
Then, "1st.tf" has this code below:
provider "google" {
  credentials = file("myCredentials.json")
  project     = "myproject-113738"
  region      = "asia-northeast1"
}

resource "google_project_service" "project" {
  service                    = "iam.googleapis.com"
  disable_dependent_services = true
}
And "2nd.tf" has this code below:
resource "google_service_account" "service_account_1" {
  display_name = "Service Account 1"
  account_id   = "service-account-1"
}

resource "google_service_account" "service_account_2" {
  display_name = "Service Account 2"
  account_id   = "service-account-2"
}
Now, first, you want to apply only the code in "1st.tf", so you comment out the code in "2nd.tf":
1st.tf:
provider "google" {
  credentials = file("myCredentials.json")
  project     = "myproject-113738"
  region      = "asia-northeast1"
}

resource "google_project_service" "project" {
  service                    = "iam.googleapis.com"
  disable_dependent_services = true
}
2nd.tf (Comment Out):
# resource "google_service_account" "service_account_1" {
#   display_name = "Service Account 1"
#   account_id   = "service-account-1"
# }
#
# resource "google_service_account" "service_account_2" {
#   display_name = "Service Account 2"
#   account_id   = "service-account-2"
# }
Then, you apply:
terraform apply -auto-approve
Next, you additionally want to apply the code in "2nd.tf", so you uncomment it:
1st.tf:
provider "google" {
  credentials = file("myCredentials.json")
  project     = "myproject-113738"
  region      = "asia-northeast1"
}

resource "google_project_service" "project" {
  service                    = "iam.googleapis.com"
  disable_dependent_services = true
}
2nd.tf (Uncomment):
resource "google_service_account" "service_account_1" {
  display_name = "Service Account 1"
  account_id   = "service-account-1"
}

resource "google_service_account" "service_account_2" {
  display_name = "Service Account 2"
  account_id   = "service-account-2"
}
Then, you apply:
terraform apply -auto-approve
This way, you can apply selected code by commenting and uncommenting it.
Note that terraform apply -target nginx-docker.tf does not work: -target expects a resource or module address (e.g. terraform apply -target=aws_instance.web), not a file name.
