I am trying to use Terragrunt to manage AWS infrastructure, and the problem I am facing is about the backend configuration changing. The simplest way to reproduce the problem is:
terragrunt init -reconfigure -backend-config="workspace_key_prefix=ujjwal"
terragrunt workspace new ujjwal
terragrunt apply
It throws the error below:
Backend config has changed from map[region:us-east-1 workspace_key_prefix:ujjwal bucket:distplat-phoenix-live dynamodb_table:df04-phoenix-live encrypt:%!s(bool=true) key:vpc-main/terraform.tfstate] to map[bucket:distplat-phoenix-live key:vpc-main/terraform.tfstate region:us-east-1 encrypt:%!s(bool=true) dynamodb_table:df04-phoenix-live]
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
When I say yes to this, I can see that a folder named env: is created in S3, and the .tfstate file is placed there instead of in the workspace directory I created.
Below is the content of the terraform.tfvars file in the root directory:
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket         = "xxxxxxx"
      key            = "${path_relative_to_include()}/terraform.tfstate"
      region         = "us-east-1"
      encrypt        = true
      dynamodb_table = "yyyyyyyyy"

      s3_bucket_tags {
        owner = "Ujjwal Singh"
        name  = "Terraform state storage"
      }

      dynamodb_table_tags {
        owner = "Ujjwal"
        name  = "Terraform lock for vpc"
      }
    }
  }
}
Any help is much appreciated.
I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and a DynamoDB table). When I then use one of the modules, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "mymodule" {
  source      = "./module/mymodule"
  region      = "param1"
  prefix      = "param2"
  project     = "xxxx"
  username    = "ddd"
  contact     = "myemail"
  table_name  = "table-name"
  bucket_name = "uniquebucketname"
}
I leave the remote state part commented out and let Terraform create a local state file along with all the resources (including the bucket and the DynamoDB table).
After the resources are created, I re-run terraform init and migrate the state to S3.
I wonder whether this is good practice, or whether there is a better way to maintain the state while also providing isolation.
That is an interesting approach. I would create the S3 bucket manually, since it is a one-time setup for your state file management. Then I would add a policy to prevent deletion (see here: https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets) and enable versioning and/or a backup.
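For the deletion-protection part, a minimal sketch of such a policy managed with Terraform, assuming a hypothetical pre-created bucket named my-terraform-state (the same statement could just as well be attached manually in the console):

# Sketch only: deny bucket deletion on a hypothetical, manually created state bucket.
resource "aws_s3_bucket_policy" "prevent_state_bucket_deletion" {
  bucket = "my-terraform-state" # hypothetical bucket name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyStateBucketDeletion"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:DeleteBucket"
        Resource  = "arn:aws:s3:::my-terraform-state"
      }
    ]
  })
}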
Beyond this approach there are better practices, such as using a tool like Terraform Cloud, which is free for up to five users. In your Terraform root module configuration you would then put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name   = ""  ## for single-workspace jobs
      # prefix = ""  ## for multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
More details in this similar Q&A: Initial setup of terraform backend using terraform
I am using the Terraform Snowflake provider. I want to use the ${terraform.workspace} variable inside the terraform block.
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.20.0"
    }
  }

  backend "s3" {
    bucket         = "data-pf-terraform-backend-${terraform.workspace}"
    key            = "backend/singlife/landing"
    region         = "ap-southeast-1"
    dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
  }
}
But I got the error below. Are variables not available in this scope?
Error: Variables not allowed
on provider.tf line 9, in terraform:
9: bucket = "data-pf-terraform-backend-${terraform.workspace}"
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 12, in terraform:
12: dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}"
Variables may not be used here.
Set up backend.tf:
terraform {
  backend "azurerm" {}
}
Create a file backend.conf:
storage_account_name = "deploymanager"
container_name = "terraform"
key = "production.terraform.tfstate"
Run:
terraform init -backend-config=backend.conf
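This partial-configuration mechanism is also how you vary the backend per environment without variables: keep the backend block empty and pass the values at init time. A hedged example (the file names and key/value settings here are illustrative):

# Pick the backend configuration per environment at init time
terraform init -backend-config=backend.dev.conf

# or override individual settings as key=value pairs
terraform init \
  -backend-config="storage_account_name=deploymanager" \
  -backend-config="container_name=terraform" \
  -backend-config="key=dev.terraform.tfstate"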
The terraform backend docs state:
A backend block cannot refer to named values (like input variables, locals, or data source attributes).
However, the s3 backend docs show you how you can partition some s3 storage based on the current workspace, so each workspace gets its own independent state file. You just can't specify a distinct bucket for each workspace. You can only specify one bucket for all workspaces, but the s3 backend will add the workspace prefix to the path:
When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
And one DynamoDB table will suffice for all workspaces. So just use:
backend "s3" {
bucket = "data-pf-terraform-backend"
key = "terraform.tfstate"
region = "ap-southeast-1"
dynamodb_table = "data-pf-snowflake-terraform-state-lock"
}
And switch workspaces as appropriate before deployments.
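For illustration, a hedged sketch of that workflow with a hypothetical dev workspace; the resulting S3 paths assume the default workspace_key_prefix of env::

terraform init
terraform workspace new dev   # or: terraform workspace select dev
terraform apply

# Resulting state objects in S3:
#   default workspace: s3://data-pf-terraform-backend/terraform.tfstate
#   dev workspace:     s3://data-pf-terraform-backend/env:/dev/terraform.tfstate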
But how is Jhonny's answer any different? You still cannot put variables in backend.conf, which was the initial question.
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.conf line 1:
│ 1: bucket = "server-${var.account_id}"
│
│ Variables may not be used here.
The only way for now is to use a wrapper script that supplies those values (for example from environment variables) via -backend-config, unfortunately.
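A minimal sketch of such a wrapper, assuming a hypothetical ACCOUNT_ID environment variable exported by the caller and the S3 backend from the error above:

#!/usr/bin/env sh
# Sketch only: ACCOUNT_ID must be exported by the caller; bucket/key/region are illustrative.
set -eu

terraform init -reconfigure \
  -backend-config="bucket=server-${ACCOUNT_ID}" \
  -backend-config="key=terraform.tfstate" \
  -backend-config="region=us-east-1"

terraform "$@"   # e.g. ./tf.sh plan, ./tf.sh apply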
You could check out Terragrunt, which is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.
See here: https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
Check Jhonny's solution first:
https://stackoverflow.com/a/69664785/132438
(keeping this one for historical reference)
This seems like a specific instance of a more common problem in Terraform: concatenating variables.
Using locals to concatenate should fix it. See https://www.terraform.io/docs/configuration/locals.html
An example from https://stackoverflow.com/a/61506549/132438:
locals {
  BUCKET_NAME = [
    "bh.${var.TENANT_NAME}.o365.attachments",
    "bh.${var.TENANT_NAME}.o365.eml"
  ]
}

resource "aws_s3_bucket" "b" {
  bucket = "${element(local.BUCKET_NAME, 2)}"
  acl    = "private"
}
I'm working on a Terraform task where I need to connect two Terraform S3 backends. We have two repos for our Terraform scripts: the main one creates the dev/qa/prod environments, and the other one manages the users/policies required by the first.
We use S3 as the backend, and I want to connect the two backends so they can take IDs/names from each other without hardcoding them.
Say you have a backend A / Terraform project A with your IDs/names:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

output "names" {
  value = ["bob", "jim"]
}
In your other terraform project B you can refer to the above backend A as a data source:
data "terraform_remote_state" "remote_state" {
backend = "s3"
config = {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
Then in Terraform project B you can fetch the outputs of the remote state containing the names/IDs:
data.terraform_remote_state.remote_state.outputs.names
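For example, you could re-expose the fetched value in project B; the output name here is arbitrary:

output "names_from_project_a" {
  # Surfaces project A's "names" output among project B's own outputs
  value = data.terraform_remote_state.remote_state.outputs.names
}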
I am trying to use a 2-repo IaC setup, with the so-called back-end in the form of Terragrunt modules and the front-end (or live) repo containing the instantiations of those modules, filled in with variables.
The image below depicts the structure of those two repos (terragrunt being the back-end and terraform-live the live one, as the names imply).
In my terragrunt/aws-vpc/variables.tf, there is the following declaration:
variable "remote_state_bucket" {
description = "The bucket containing the terraform remote state"
}
However, when trying to perform a terragrunt apply in the live directory, I get the following:
var.remote_state_bucket
The bucket containing the terraform remote state
Enter a value:
Here is my terraform-live/environments/staging/terragrunt.hcl:
remote_state {
  backend = "s3"

  config = {
    bucket = "my-bucket-staging"
    key    = "terraform/state/var.env_name/${path_relative_to_include()}"
    region = "eu-west-1"
  }
}

# Configure root level variables that all resources can inherit
terraform {
  extra_arguments "extra_args" {
    commands = "${get_terraform_commands_that_need_vars()}"

    optional_var_files = [
      "${get_terragrunt_dir()}/${find_in_parent_folders("config.tfvars", "ignore")}",
      "${get_terragrunt_dir()}/${find_in_parent_folders("secrets.auto.tfvars", "ignore")}",
    ]
  }
}
What is more, the variable seems to be declared in one of the files that terragrunt is instructed to read variables from:
➢ cat terraform-live/environments/staging/config.tfvars
remote_state_bucket = "pkaramol-staging"
Why is terragrunt (or terraform ?) unable to read the specific variable?
➢ terragrunt --version
terragrunt version v0.19.29
➢ terraform --version
Terraform v0.12.4
Because config.tfvars is not in a parent folder :)
find_in_parent_folders looks in parent folders, but not in the current folder. And your config.tfvars is in the same folder as your terragrunt.hcl.
Try using something like:
optional_var_files = [
  "${get_terragrunt_dir()}/config.tfvars",
  "${get_terragrunt_dir()}/secrets.auto.tfvars",
]
According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
  backend "s3" {
    bucket = "my-bucket-name"
    key    = "my-key-name"
    region = "my-region"
  }
}
I was using a local (terraform.tfstate) file; I added the above snippet to my provided.tf file, ran terraform init again, and was asked by Terraform to migrate my state to the above bucket.
...so far so good...
But then comes this confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now saved remotely (in the aforementioned S3 bucket) already?
terraform_remote_state isn't for storing your state; it's for retrieving it in another Terraform plan if you have outputs. It is a data source. For example, if you output your Elastic IP address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then you want to retrieve that in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
Edit: if you are retrieving outputs in Terraform 0.12 or later, you need to include .outputs in the reference:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}
Remote state allows you to collaborate with other team members and gives you a central location to store your infrastructure state.
Apart from that, by enabling S3 versioning you also get versioning for the state file, so you can track changes.
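As an illustration, a minimal sketch of a state bucket with versioning enabled, assuming AWS provider 4.x and a hypothetical bucket name:

resource "aws_s3_bucket" "state" {
  bucket = "my-terraform-state" # hypothetical bucket name
}

# Keep every version of the state objects so changes can be tracked and rolled back
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id

  versioning_configuration {
    status = "Enabled"
  }
}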