Configuring remote state in Terraform seems duplicated? - terraform

I am configuring remote state in terraform like:
provider "aws" {
region = "ap-southeast-1"
}
terraform {
backend "s3" {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
data "terraform_remote_state" "s3_state" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
This seems very duplicated, though. Why is it like that? I have the same values in the terraform block and in the terraform_remote_state data source block. Is this actually required?

The terraform.backend configuration is for configuring where to store remote state for the Terraform context/directory that Terraform is being run from.
This allows you to share state between different machines, back up your state, and coordinate usages of a Terraform context via state locking.
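For example, a locking-enabled S3 backend looks roughly like this (the DynamoDB table name here is a placeholder; the table's primary key must be a string attribute named LockID):

terraform {
  backend "s3" {
    bucket = "xxx-artifacts"
    key    = "terraform_state.tfstate"
    region = "ap-southeast-1"

    # State locking and consistency checking via DynamoDB (assumed table name)
    dynamodb_table = "terraform-locks"
  }
}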
The terraform_remote_state data source is, like other data sources, for retrieving data from an external source, in this case a Terraform state file.
This allows you to retrieve information stored in a state file from another Terraform context and use that elsewhere.
For example, in one location you might create an aws_elasticsearch_domain but then need to look up the endpoint of the domain in another context (such as for configuring where to ship logs to). At the time of writing there isn't a data source for ES domains, so you would need to either hardcode the endpoint elsewhere or look it up with the terraform_remote_state data source like this:
elasticsearch/main.tf
resource "aws_elasticsearch_domain" "example" {
domain_name = "example"
elasticsearch_version = "1.5"
cluster_config {
instance_type = "r4.large.elasticsearch"
}
snapshot_options {
automated_snapshot_start_hour = 23
}
tags = {
Domain = "TestDomain"
}
}
output "es_endpoint" {
value = "$aws_elasticsearch_domain.example.endpoint}"
}
logstash/userdata.sh.tpl
#!/bin/bash
sed -i 's/|ES_DOMAIN|/${es_domain}/' /etc/logstash.conf
logstash/main.tf
data "terraform_remote_state" "elasticsearch" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "elasticsearch.tfstate"
region = "ap-southeast-1"
}
}
data "template_file" "logstash_config" {
template = "${file("${path.module}/userdata.sh.tpl")}"
vars {
es_domain = "${data.terraform_remote_state.elasticsearch.es_endpoint}"
}
}
resource "aws_instance" "foo" {
# ...
user_data = "${data.template_file.logstash_config.rendered}"
}
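As an aside, on Terraform 0.12 and later the same rendering can be done with the built-in templatefile function instead of the template_file data source; a minimal sketch under the same file layout:

resource "aws_instance" "foo" {
  # ...
  # Render the template directly; no template_file data source needed
  user_data = templatefile("${path.module}/userdata.sh.tpl", {
    es_domain = data.terraform_remote_state.elasticsearch.outputs.es_endpoint
  })
}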

Related

terraform remote state best practice

I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and DynamoDB table).
When I then use the module, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
provider "aws" {
region = var.region
}
module "mymodule" {
source = "./module/mymodule"
region = "param1"
prefix = "param2"
project = "xxxx"
username = "ddd"
contact = "myemail"
table_name = "table-name"
bucket_name = "uniquebucketname"
}
I leave the remote state part commented out and let Terraform create a local state and all of the resources (including the bucket and the DynamoDB table).
After the resources are created, I re-run terraform init and migrate the state to S3.
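For concreteness, that bootstrap-then-migrate flow with a recent Terraform CLI is roughly:

# 1. With the backend block commented out, create everything using local state
terraform init
terraform apply

# 2. Uncomment the backend "s3" block, then move the local state into the bucket
terraform init -migrate-state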
I wonder if this is good practice, or if there is something better for maintaining the state that also provides isolation.
That is an interesting approach. I would create the S3 bucket manually, since it is a one-time creation for your state file management. Then I would add a policy to prevent deletion (see https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets) plus versioning and/or a backup.
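A minimal sketch of that hardening, assuming you do manage the bucket in Terraform after all (resource names and the policy statement are illustrative):

resource "aws_s3_bucket" "terraform_state" {
  bucket = "uniquebucketname"

  # Guard against an accidental "terraform destroy"
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_policy" "deny_delete" {
  bucket = aws_s3_bucket.terraform_state.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyBucketDeletion"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:DeleteBucket"
      Resource  = aws_s3_bucket.terraform_state.arn
    }]
  })
}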
Beyond this approach there are better practices, such as using tools like Terraform Cloud, which is free for up to 5 users. Then in your Terraform root module configuration you would put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name = ""   ## For single workspace jobs
      # prefix = "" ## For multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
More details in this similar Q&A: Initial setup of terraform backend using terraform
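After adding that block, connecting the working directory is just (terraform login needs Terraform CLI 0.12.21 or later):

terraform login   # obtains and stores an API token for app.terraform.io
terraform init    # links this configuration to the Terraform Cloud workspace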

Terraform assigning outputs to variable or pulling output from state file and using it

I am working with Terraform, trying to expose the security group ID as an output, pull it from the Terraform state file, and use that information in a different resource; in my case it would be an aws_eks_cluster, in the vpc_config block.
In the module that has the security group:
output "security_group_id" {
value = aws_security_group.a_group.id
}
In the module that reads the output (the backend config depends on which backend type you are using and how it is configured):
data "terraform_remote_state" "security_group" {
backend = "s3"
config {
bucket = "your-terraform-state-files"
key = "your-state-file-key.tfstate"
region = "us-east-1"
}
}
locals {
the_security_group_id = data.terraform_remote_state.security_group.outputs.security_group_id
}
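The local value can then feed the cluster's vpc_config, as described in the question; a sketch where the cluster name, IAM role and subnet IDs are placeholders:

resource "aws_eks_cluster" "example" {
  name     = "example-cluster"
  role_arn = var.cluster_role_arn # hypothetical variable

  vpc_config {
    subnet_ids         = var.subnet_ids # hypothetical variable
    security_group_ids = [local.the_security_group_id]
  }
}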

Is there a way to retrieve data from another Account in Terraform?

I want to use a data source in Terraform, for example for an SNS topic, but I don't want it to look for the resource in the AWS account I'm deploying my other resources to. It should instead look in another AWS account of mine (in the same organization) and find the resource there. Is there a way to make this happen?
data "aws_sns_topic" "topic_alarms_data" {
name = "topic_alarms"
}
Define an aws provider with credentials to the remote account:
# Default provider that you use:
provider "aws" {
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.account_id)
  }
}

# Aliased provider pointing at the remote account:
provider "aws" {
  alias  = "remote"
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.remote_account_id)
  }
}

data "aws_sns_topic" "topic_alarms_data" {
  provider = aws.remote
  name     = "topic_alarms"
}
Now the topic is looked up through the second (remote account) provider.
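The looked-up topic can then be used from the default account like any other data source result; for example (the alarm settings are illustrative):

resource "aws_cloudwatch_metric_alarm" "example" {
  alarm_name          = "example-cpu-alarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80

  # Notify the SNS topic that lives in the other account
  alarm_actions = [data.aws_sns_topic.topic_alarms_data.arn]
}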

How to use remote state data sources within child modules

I am trying to call data from a remote state to reference a vpc_id for a network acl. When I run plan/apply, I receive the error "This object has no argument, nested block, or exported attribute named "vpc_id"."
I've tried using "data.terraform_remote_state.*.vpc_id", as well as "${}" syntax. I tried defining the data.remote info in the variables.tf for the child module, and the parent module.
I ultimately need to be able to call this module for different VPCs/subnets dynamically.
The relevant VPC already exists and all modules are initialized.
s3 bucket stage/network/vpc/terraform.tfstate:
"outputs": {
"vpc_id": {
"value": "vpc-1234567890",
"type": "string"
}
},
modules/network/acl/main.tf:
data "terraform_remote_state" "stage-network" {
backend = "s3"
config = {
bucket = "bucket"
key = "stage/network/vpc/terraform.tfstate"
}
}
resource "aws_network_acl" "main" {
vpc_id = data.terraform_remote_state.stage-network.vpc_id
# acl variables here
stage/network/acl/main.tf:
data "terraform_remote_state" "stage-network" {
backend = "s3"
config = {
bucket = "bucket"
key = "stage/network/vpc/terraform.tfstate"
}
}
module "create_acl" {
source = "../../../modules/network/acl/"
vpc_id = var.vpc_id
# vpc_id = data.terraform_remote_state.stage-network.vpc_id
# vpc_id = "${data.terraform_remote_state.stage-network.vpc_id}"
# vpc_id = var.data.terraform_remote_state.stage-network.vpc_id
I am expecting the ACL parent module to be able to associate with the VPC, and from there the child module to be able to configure the variables.
This is one of the breaking changes that the 0.12.x versions of Terraform introduced.
The terraform_remote_state data source has changed slightly for the v0.12 release to make all of the remote state outputs available as a single map value, rather than as top-level attributes as in previous releases.
In previous releases, a reference to a vpc_id output exported by the remote state data source might have looked like this:
data.terraform_remote_state.vpc.vpc_id
This value must now be accessed via the new outputs attribute:
data.terraform_remote_state.vpc.outputs.vpc_id
Source: https://www.terraform.io/upgrade-guides/0-12.html#remote-state-references
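Applied to the configuration in the question, the reference becomes:

module "create_acl" {
  source = "../../../modules/network/acl/"
  vpc_id = data.terraform_remote_state.stage-network.outputs.vpc_id
}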
In the first state:

# ...
output "expose_vpc_id" {
  value = "${module.network.vpc_id}"
}
In another state, to share data between Terraform configurations:
data "terraform_remote_state" "remote" {
backend = "s3"
config = {
bucket = "terraform-ex1"
key = "tera-ex1.tfstate"
region = "us-east-1"
}
}
output "vpc_id" {
value = "${data.terraform_remote_state.remote.outputs.expose_vpc_id}"
}

Terraform s3 backend vs terraform_remote_state

According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
  backend "s3" {
    bucket = "my-bucket-name"
    key    = "my-key-name"
    region = "my-region"
  }
}
Here is what I did:
- I was using a local (terraform.tfstate) file
- I added the above snippet to my provider.tf file
- I ran terraform init (again)
- I was asked by Terraform to migrate my state to the above bucket

...so far so good...
But then comes the confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now already saved remotely (in the aforementioned S3 bucket)?
terraform_remote_state isn't for storage of your state its for retrieval in another terraform plan if you have outputs. It is a data source. For example if you output your Elastic IP Address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then you want to retrieve it in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
Edit: if you are retrieving outputs in Terraform >= 0.12, you need to go through the outputs attribute (and config becomes an argument, config = { ... }):
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}
Remote state allows you to collaborate with other team members and gives you a central location to store your infrastructure state.
Apart from that, by enabling S3 versioning you get versioning for the state file, to track changes.
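With AWS provider v4 and later, enabling that versioning on the state bucket looks like this (the bucket name is a placeholder):

resource "aws_s3_bucket_versioning" "state" {
  bucket = "my-bucket-name"

  versioning_configuration {
    status = "Enabled"
  }
}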
