How to create a Terraform module of data sources?

I want to create a module of data sources, but I am unsure how to declare them. The different accounts are all going to use the same ones, and they are already in place.
The data sources are for IAM roles and policies.
I know you usually do:
module "iam" {
  source = "folder"
  name   = "blabla"
  # ...
}
Thanks a lot!

You could create a separate 'environment' for that. Let's name it general.
If you give it its own backend and configure it to use an S3 bucket as remote storage (recommended anyway when you work with multiple contributors), you can work with terraform_remote_state.
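For illustration, the backend configuration inside general might look like this (the bucket name is an assumption, substitute your own; the key matches the example below):
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # assumption: your shared state bucket
    key    = "environments/general/terraform.tfstate"
    region = "eu-central-1"
  }
}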
Then import the state of general into your environment with:
data "terraform_remote_state" "general" {
  backend = "s3"

  config = {
    region = "..." # e.g. "eu-central-1"
    bucket = "..." # the remote-storage bucket name of 'general'
    key    = "..." # e.g. "environments/general/terraform.tfstate" (as defined in the 'general' backend!)
  }
}
You can then access resources from that state with ami = data.terraform_remote_state.general.outputs.ami (on Terraform 0.12+; on older versions drop the .outputs part), provided you declared them as output variables in general:
output "ami" {
description = "The ID of the default EC2 AMI"
value = "${var.ami}"
}
Of course you can also output resource attributes:
output "vpc_id" {
description = "The ID of the created VPC"
value = "${aws_vpc.vpc.id}"
}

Related

How to Output Values From Another File in Terraform

I have two folders with a few files in each:
services/
  dns.tf
app/
  outputs.tf
In the dns.tf I have the following:
resource "cloudflare_record" "pgsql_master_record" {
count = var.pgsql_enabled ? 1 : 0
zone_id = data.cloudflare_zone.this.id
name = "${var.name}.pg.${var.jurisdiction}"
value = module.db[0].primary.ip_address.0.ip_address
type = "A"
ttl = 3600
}
resource "cloudflare_record" "redis_master_record" {
count = var.redis_enabled ? 1 : 0
zone_id = data.cloudflare_zone.this.id
name = "${var.name}.redis.${var.jurisdiction}"
value = module.redis[0].host
type = "A"
ttl = 3600
}
And in my app outputs.tf I'd like to add outputs for the above resources:
output "psql_master_record" {
  value = cloudflare_record.pgsql_master_record[*].hostname
}

output "redis_master_record" {
  value = cloudflare_record.redis_master_record[*].hostname
}
But I keep getting this error:
A managed resource "cloudflare_record" "redis_master_record" has not been declared in the root module.
You can't do it that way: your dns.tf and outputs.tf would need to be in the same folder (the same Terraform root module).
Alternatively, you can use a data block with remote state.
In Terraform, you can output values from a configuration using the output block. These outputs can then be referenced within the same configuration using interpolation syntax, or from another configuration using the terraform_remote_state data source.
Here's an example of how you might use the output block to output the value of an EC2 instance's ID:
resource "aws_instance" "example" {
# ...
}
output "instance_id" {
value = aws_instance.example.id
}
Within the same configuration you can simply reference aws_instance.example.id directly; output values are what a child module exposes to its parent (module.<name>.instance_id) or what a state file exposes to other configurations.
To use the output value from another configuration, you'll first need to create a data source for the remote state using the terraform_remote_state data source. Here's an example of how you might do that:
data "terraform_remote_state" "example" {
backend = "s3"
config {
bucket = "my-tf-state-bucket"
key = "path/to/state/file"
region = "us-west-2"
}
}
Then, you can reference the output value from the remote configuration as data.terraform_remote_state.example.outputs.instance_id.
As far as I know, you have to run terraform per directory. Within one directory you can have multiple Terraform files and use values from file A in file B. You are currently splitting the configuration across two directories, and that is only possible with a module approach (or with remote state); it does not work out of the box.
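A minimal sketch of the module approach, assuming the folder layout above (services must declare the output itself before app can re-export it):
# services/outputs.tf -- expose the record hostnames from the services configuration
output "pgsql_master_record" {
  value = cloudflare_record.pgsql_master_record[*].hostname
}

# app/main.tf -- instantiate services as a child module and re-export its output
module "services" {
  source = "../services"
  # ...any variables services requires (name, jurisdiction, ...)
}

output "pgsql_master_record" {
  value = module.services.pgsql_master_record
}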

Terraform assigning outputs to variable or pulling output from stateful and using it

I am working with Terraform and trying to output the security group ID as an output, pull it from the local Terraform state file, and use that information in a different resource; in my case an aws_eks_cluster, in its vpc_config section.
In the module that has the security group:
output "security_group_id" {
value = aws_security_group.a_group.id
}
In the module that reads the output (the backend config depends on which backend type you are using and how it is configured):
data "terraform_remote_state" "security_group" {
backend = "s3"
config {
bucket = "your-terraform-state-files"
key = "your-state-file-key.tfstate"
region = "us-east-1"
}
}
locals {
the_security_group_id = data.terraform_remote_state.security_group.outputs.security_group_id
}
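From there the local value can be wired into the EKS cluster's vpc_config; a minimal sketch, where the cluster name, role ARN, and subnet IDs are assumptions:
resource "aws_eks_cluster" "example" {
  name     = "example"            # assumption: your cluster name
  role_arn = var.cluster_role_arn # assumption: an existing IAM role for EKS

  vpc_config {
    subnet_ids         = var.subnet_ids # assumption: your cluster subnets
    security_group_ids = [local.the_security_group_id]
  }
}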

Terraform optional provider for optional resource

I have a module where I want to conditionally create an s3 bucket in another region. I tried something like this:
resource "aws_s3_bucket" "backup" {
count = local.has_backup ? 1 : 0
provider = "aws.backup"
bucket = "${var.bucket_name}-backup"
versioning {
enabled = true
}
}
but it appears that I need to provide the aws.backup provider even if count is 0. Is there any way around this?
NOTE: this wouldn't be a problem if I could use a single provider to create buckets in multiple regions, see https://github.com/terraform-providers/terraform-provider-aws/issues/8853
Based on your description, I understand that you want to create resources using the same "profile", but in a different region.
For that case I would take the following approach:
Create a module file for your s3_bucket_backup; in that file you will build your "backup provider" from variables.
# Module file for s3_bucket_backup
provider "aws" {
  region  = var.region
  profile = var.profile
  alias   = "backup"
}

variable "profile" {
  type        = string
  description = "AWS profile"
}

variable "region" {
  type        = string
  description = "AWS region for the backup bucket"
}

variable "has_backup" {
  type        = bool
  description = "Whether to create the backup bucket"
}

variable "bucket_name" {
  type        = string
  description = "Base name of the bucket"
}

resource "aws_s3_bucket" "backup" {
  count    = var.has_backup ? 1 : 0
  provider = aws.backup
  bucket   = "${var.bucket_name}-backup"
}
In your main tf file, declare your provider profile using local variables, then call the module, passing the profile and a different region:
# Main tf file
provider "aws" {
  region  = "us-east-1"
  profile = local.profile
}

locals {
  profile    = "default"
  has_backup = false
}

module "s3_backup" {
  source      = "./module"
  profile     = local.profile
  region      = "us-east-2"
  has_backup  = true
  bucket_name = "my-bucket-name"
}
And there you have it: you can now build your s3_bucket_backup using the same "profile" with different regions.
In the case above, the region used by the main file is us-east-1 and the bucket is created in us-east-2.
If you set has_backup to false, it won't create anything.
Since the "backup provider" is built inside the module, your code won't look "dirty" from having multiple providers in the main tf file.

How to use remote state data sources within child modules

I am trying to call data from a remote state to reference a vpc_id for a network acl. When I run plan/apply, I receive the error "This object has no argument, nested block, or exported attribute named "vpc_id"."
I've tried using data.terraform_remote_state.*.vpc_id as well as the "${}" interpolation syntax. I also tried defining the remote-state data block in the variables.tf of the child module and of the parent module.
I ultimately need to be able to call this module for different VPCs/subnets dynamically.
The relevant VPC already exists and all modules are initialized.
s3 bucket stage/network/vpc/terraform.tfstate:
"outputs": {
"vpc_id": {
"value": "vpc-1234567890",
"type": "string"
}
},
modules/network/acl/main.tf:
data "terraform_remote_state" "stage-network" {
backend = "s3"
config = {
bucket = "bucket"
key = "stage/network/vpc/terraform.tfstate"
}
}
resource "aws_network_acl" "main" {
vpc_id = data.terraform_remote_state.stage-network.vpc_id
# acl variables here
stage/network/acl/main.tf:
data "terraform_remote_state" "stage-network" {
backend = "s3"
config = {
bucket = "bucket"
key = "stage/network/vpc/terraform.tfstate"
}
}
module "create_acl" {
source = "../../../modules/network/acl/"
vpc_id = var.vpc_id
# vpc_id = data.terraform_remote_state.stage-network.vpc_id
# vpc_id = "${data.terraform_remote_state.stage-network.vpc_id}"
# vpc_id = var.data.terraform_remote_state.stage-network.vpc_id
I am expecting the acl parent module to be able to associate with the VPC, and from there the child module to be able to configure the variables.
This is one of the breaking changes that the 0.12.x versions of Terraform introduced.
The terraform_remote_state data source has changed slightly for the v0.12 release to make all of the remote state outputs available as a single map value, rather than as top-level attributes as in previous releases.
In previous releases, a reference to a vpc_id output exported by the remote state data source might have looked like this:
data.terraform_remote_state.vpc.vpc_id
This value must now be accessed via the new outputs attribute:
data.terraform_remote_state.vpc.outputs.vpc_id
Source: https://www.terraform.io/upgrade-guides/0-12.html#remote-state-references
In the first state:
# ...
output "expose_vpc_id" {
  value = module.network.vpc_id
}
In another state, to share between terraform configs:
data "terraform_remote_state" "remote" {
backend = "s3"
config = {
bucket = "terraform-ex1"
key = "tera-ex1.tfstate"
region = "us-east-1"
}
}
output "vpc_id" {
value = "${data.terraform_remote_state.remote.outputs.expose_vpc_id}"
}

Configuring remote state in Terraform seems duplicated?

I am configuring remote state in terraform like:
provider "aws" {
region = "ap-southeast-1"
}
terraform {
backend "s3" {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
data "terraform_remote_state" "s3_state" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "terraform_state.tfstate"
region = "ap-southeast-1"
}
}
It seems very duplicated, though. Why is it like that? I have the same values in the terraform block and in the terraform_remote_state data source block. Is this actually required?
The terraform.backend configuration determines where to store the remote state of the Terraform context/directory that Terraform is being run from.
This allows you to share state between different machines, back up your state, and coordinate between usages of a Terraform context via state locking.
The terraform_remote_state data source is, like other data sources, for retrieving data from an external source, in this case a Terraform state file.
This allows you to retrieve information stored in a state file from another Terraform context and use that elsewhere.
For example, in one location you might create an aws_elasticsearch_domain but then need to look up the endpoint of the domain in another context (such as for configuring where to ship logs to). Currently there isn't a data source for ES domains, so you would either need to hardcode the endpoint elsewhere or look it up with the terraform_remote_state data source like this:
elasticsearch/main.tf
resource "aws_elasticsearch_domain" "example" {
domain_name = "example"
elasticsearch_version = "1.5"
cluster_config {
instance_type = "r4.large.elasticsearch"
}
snapshot_options {
automated_snapshot_start_hour = 23
}
tags = {
Domain = "TestDomain"
}
}
output "es_endpoint" {
value = "$aws_elasticsearch_domain.example.endpoint}"
}
logstash/userdata.sh.tpl
#!/bin/bash
sed -i 's/|ES_DOMAIN|/${es_domain}/' /etc/logstash.conf
logstash/main.tf
data "terraform_remote_state" "elasticsearch" {
backend = "s3"
config {
bucket = "xxx-artifacts"
key = "elasticsearch.tfstate"
region = "ap-southeast-1"
}
}
data "template_file" "logstash_config" {
template = "${file("${path.module}/userdata.sh.tpl")}"
vars {
es_domain = "${data.terraform_remote_state.elasticsearch.es_endpoint}"
}
}
resource "aws_instance" "foo" {
# ...
user_data = "${data.template_file.logstash_config.rendered}"
}
