Terraform s3 backend vs terraform_remote_state

According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
  backend "s3" {
    bucket = "my-bucket-name"
    key    = "my-key-name"
    region = "my-region"
  }
}
I was using a local (terraform.tfstate) file. I:
- added the above snippet to my provider.tf file
- ran terraform init again
- was asked by terraform to migrate my state to the above bucket
...so far so good...
But then comes this confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now saved remotely (in the aforementioned S3 bucket) already?

terraform_remote_state isn't for storing your state; it's for retrieving it in another Terraform plan if you have outputs. It is a data source. For example, if you output your Elastic IP address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then, to retrieve that in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
edit: If you are retrieving outputs in Terraform 0.12 or later, you need to go through outputs (and config becomes an argument, so use config = { ... }):
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}

Remote state gives you a central location to store your infrastructure state and lets you collaborate with other team members.
Apart from that, by enabling S3 versioning you also get versioning of the state file, so you can track changes.
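A minimal sketch of that (the bucket name is illustrative, and this assumes AWS provider v4+, where versioning is configured as its own resource):

resource "aws_s3_bucket" "tf_state" {
  bucket = "my-bucket-name"
}

# Keep every revision of the state objects so changes can be inspected or rolled back.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}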

Related

terraform remote state best practice

I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and a DynamoDB table).
When I then use the module, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "mymodule" {
  source      = "./module/mymodule"
  region      = "param1"
  prefix      = "param2"
  project     = "xxxx"
  username    = "ddd"
  contact     = "myemail"
  table_name  = "table-name"
  bucket_name = "uniquebucketname"
}
I leave the remote state part commented out and let Terraform create a local state and all the resources (including the bucket and the DynamoDB table).
After the resources are created, I re-run terraform init and migrate the state to S3.
I wonder if this is good practice, or if there is something better for maintaining the state that also provides isolation.
That is an interesting approach. I would create the S3 bucket manually, since it is a one-time creation for your state file management. Then I would add a policy to prevent deletion (see https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets), plus versioning and/or a backup.
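A minimal sketch of such a deny-delete policy in Terraform (the resource and bucket names here are illustrative, and the bucket itself is assumed to already exist, created manually as described above):

resource "aws_s3_bucket_policy" "deny_state_bucket_deletion" {
  bucket = "my-terraform-state-bucket" # assumed bucket name

  # Deny deletion of the state bucket for everyone, regardless of IAM permissions.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyStateBucketDeletion"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:DeleteBucket"
      Resource  = "arn:aws:s3:::my-terraform-state-bucket"
    }]
  })
}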
Beyond this approach, there are better practices, such as using tools like Terraform Cloud, which is free for up to 5 users. Then, in your Terraform root module configuration, you would put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name   = "" ## For single workspace jobs
      # prefix = "" ## For multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
More details in this similar Q&A: Initial setup of terraform backend using terraform

Terraform assigning outputs to variable or pulling output from state file and using it

I am working with Terraform and trying to expose the security group ID as an output, pull it from the terraform state file, and use that information in a different resource; in my case that would be an aws_eks_cluster, in the vpc_config section.
In the module that has the security group:
output "security_group_id" {
value = aws_security_group.a_group.id
}
In the module that reads the output (the backend config depends on which backend type you are using and how it is configured):
data "terraform_remote_state" "security_group" {
backend = "s3"
config {
bucket = "your-terraform-state-files"
key = "your-state-file-key.tfstate"
region = "us-east-1"
}
}
locals {
the_security_group_id = data.terraform_remote_state.security_group.outputs.security_group_id
}
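That local can then be wired into the cluster's vpc_config. A sketch, where the cluster name, role ARN, and subnet IDs are placeholder variables assumed to exist elsewhere:

resource "aws_eks_cluster" "example" {
  name     = "example-cluster"            # placeholder name
  role_arn = var.eks_cluster_role_arn     # assumed to be defined elsewhere

  vpc_config {
    subnet_ids         = var.subnet_ids   # assumed to be defined elsewhere
    security_group_ids = [local.the_security_group_id]
  }
}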

connect 2 terraform remote states

I'm working on a Terraform task where I need to connect two Terraform S3 backends. We have two repos for our TF scripts. The main one is for creating dev/qa/prod envs, and the other one is for managing the users/policies required by the first one.
We use S3 as the backend, and I want to connect both backends together so they can take IDs/names from each other without hardcoding them.
Say you have a backend A / terraform project A with your ids/names:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

output "names" {
  value = ["bob", "jim"]
}
In your other terraform project B you can refer to the above backend A as a data source:
data "terraform_remote_state" "remote_state" {
backend = "s3"
config = {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
Then in the terraform project B you can fetch the outputs of the remote state with names/ids:
data.terraform_remote_state.remote_state.outputs.names
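For example (purely illustrative, since the real resources depend on your users/policies repo), you could create one IAM user per name exported by project A:

resource "aws_iam_user" "from_project_a" {
  # Iterate over the names output of project A's state.
  for_each = toset(data.terraform_remote_state.remote_state.outputs.names)

  name = each.value
}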

Terraform attempts to create the S3 backend again when switching to a new workspace

I am following this excellent guide to terraform. I am currently on the 3rd post, exploring the state, specifically at the point where terraform workspaces are demonstrated.
So, I have the following main.tf:
provider "aws" {
region = "us-east-2"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "mark-kharitonov-terraform-up-and-running-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
# Replace this with your bucket name!
bucket = "mark-kharitonov-terraform-up-and-running-state"
key = "workspaces-example/terraform.tfstate"
region = "us-east-2"
# Replace this with your DynamoDB table name!
dynamodb_table = "terraform-up-and-running-locks"
encrypt = true
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_locks.name
description = "The name of the DynamoDB table"
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
And it is all great:
C:\work\terraform [master ≡]> terraform workspace show
default
C:\work\terraform [master ≡]> terraform apply
Acquiring state lock. This may take a few moments...
aws_dynamodb_table.terraform_locks: Refreshing state... [id=terraform-up-and-running-locks]
aws_instance.example: Refreshing state... [id=i-01120238707b3ba8e]
aws_s3_bucket.terraform_state: Refreshing state... [id=mark-kharitonov-terraform-up-and-running-state]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
Outputs:
dynamodb_table_name = terraform-up-and-running-locks
s3_bucket_arn = arn:aws:s3:::mark-kharitonov-terraform-up-and-running-state
C:\work\terraform [master ≡]>
Now I am trying to follow the guide - create a new workspace and apply the code there:
C:\work\terraform [master ≡]> terraform workspace new example1
Created and switched to workspace "example1"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
C:\work\terraform [master ≡]> terraform plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks will be created
+ resource "aws_dynamodb_table" "terraform_locks" {
...
+ name = "terraform-up-and-running-locks"
...
}
# aws_instance.example will be created
+ resource "aws_instance" "example" {
+ ami = "ami-0c55b159cbfafe1f0"
...
}
# aws_s3_bucket.terraform_state will be created
+ resource "aws_s3_bucket" "terraform_state" {
...
+ bucket = "mark-kharitonov-terraform-up-and-running-state"
...
}
Plan: 3 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Releasing state lock. This may take a few moments...
C:\work\terraform [master ≡]>
And here the problems start. In the guide, the terraform plan command reports that only one resource is going to be created (an EC2 instance). This implies that Terraform is going to reuse the same S3 bucket for the backend and the same DynamoDB table for the lock. But in my case, Terraform informs me that it wants to create all 3 resources, including the S3 bucket, which would definitely fail (I already tried).
So, what am I doing wrong? What is missing?
Creating a new workspace is effectively starting from scratch. The guide's steps are a bit confusing in this regard, but they create two plans to achieve the final result. The first creates the state S3 bucket and the locking DynamoDB table, and the second plan contains just the instance they are creating, but uses the terraform block to tell that plan where to store its state.
In your example you are both setting your state location and creating it in the same plan. This means that when you create a new workspace, it is going to attempt to create that state location a second time, because this workspace does not know about the other workspace's state.
In the end it's important to know that using workspaces creates a unique state file per workspace by adding the workspace name to the remote state path (the s3 backend puts non-default workspaces under an env:/ prefix by default, while the default workspace keeps the configured key). For example, if your state bucket is mark-kharitonov-terraform-up-and-running-state with a key of workspaces-example/terraform.tfstate, you might see the following:
Default workspace: mark-kharitonov-terraform-up-and-running-state/workspaces-example/terraform.tfstate
Workspace "other": mark-kharitonov-terraform-up-and-running-state/env:/other/workspaces-example/terraform.tfstate
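If you want those per-workspace copies to land somewhere else, the s3 backend also accepts a workspace_key_prefix argument. A sketch reusing the values from this question:

terraform {
  backend "s3" {
    bucket               = "mark-kharitonov-terraform-up-and-running-state"
    key                  = "workspaces-example/terraform.tfstate"
    region               = "us-east-2"
    dynamodb_table       = "terraform-up-and-running-locks"
    encrypt              = true
    workspace_key_prefix = "workspaces" # optional; defaults to "env:"
  }
}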
EDIT:
To be clear on how to get the guide's results: you need to create two separate plans in separate folders (all plans in your working directory will run at the same time). So create a hierarchy like:
plans/
  state/
    main.tf
  instance/
    main.tf
Inside your plans/state/main.tf file put your state location content:
provider "aws" {
region = "us-east-2"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "mark-kharitonov-terraform-up-and-running-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
Then in your plans/instance/main.tf file you can reference the created state location with the terraform block and should only need the following content:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "mark-kharitonov-terraform-up-and-running-state"
    key    = "workspaces-example/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

Configuring remote state in terraform seems duplicated?

I am configuring remote state in terraform like:
provider "aws" {
region = "ap-southeast-1"
}
terraform {
backend "s3" {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
data "terraform_remote_state" "s3_state" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
This seems very duplicated, though. Why is it like that? I have the same values in the terraform block and in the terraform_remote_state data source block. Is this actually required?
The terraform.backend configuration is for configuring where to store the remote state for the Terraform context/directory that Terraform is being run from.
This allows you to share state between different machines, back up your state, and also coordinate between usages of a Terraform context via state locking.
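As a sketch of the locking piece (the table name is an assumption, and the table needs a LockID hash key), the s3 backend just has to be pointed at a DynamoDB table:

terraform {
  backend "s3" {
    bucket         = "xxx-artifacts"
    key            = "terraform_state.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "terraform-locks" # assumed table name; enables state locking
  }
}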
The terraform_remote_state data source is, like other data sources, for retrieving data from an external source, in this case a Terraform state file.
This allows you to retrieve information stored in a state file from another Terraform context and use that elsewhere.
For example, in one location you might create an aws_elasticsearch_domain but then need to look up the endpoint of the domain in another context (such as for configuring where to ship logs to). At the time of writing there isn't a data source for ES domains, so you would need to either hardcode the endpoint elsewhere or look it up with the terraform_remote_state data source like this:
elasticsearch/main.tf
resource "aws_elasticsearch_domain" "example" {
domain_name = "example"
elasticsearch_version = "1.5"
cluster_config {
instance_type = "r4.large.elasticsearch"
}
snapshot_options {
automated_snapshot_start_hour = 23
}
tags = {
Domain = "TestDomain"
}
}
output "es_endpoint" {
value = "$aws_elasticsearch_domain.example.endpoint}"
}
logstash/userdata.sh.tpl
#!/bin/bash
sed -i 's/|ES_DOMAIN|/${es_domain}/' /etc/logstash.conf
logstash/main.tf
data "terraform_remote_state" "elasticsearch" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "elasticsearch.tfstate"
region = "ap-southeast-1"
}
}
data "template_file" "logstash_config" {
template = "${file("${path.module}/userdata.sh.tpl")}"
vars {
es_domain = "${data.terraform_remote_state.elasticsearch.es_endpoint}"
}
}
resource "aws_instance" "foo" {
# ...
user_data = "${data.template_file.logstash_config.rendered}"
}
