Can I use variables in the Terraform main.tf file?

OK, so I have three .tf files: main.tf, where I declare azurerm as the provider; resources.tf, where all my resources are declared; and variables.tf.
I use variables.tf to store keys used by resources.tf.
However, I want to use variables stored in my variables file to fill in the fields of the backend block, like this:
main.tf:
provider "azurerm" {
  version = "=1.5.0"
}

terraform {
  backend "azurerm" {
    storage_account_name = "${var.sa_name}"
    container_name       = "${var.c_name}"
    key                  = "${var.key}"
    access_key           = "${var.access_key}"
  }
}
Variables are stored in variables.tf like this:
variable "sa_name" {
  default = "myStorageAccount"
}

variable "c_name" {
  default = "tfstate"
}

variable "key" {
  default = "codelab.microsoft.tfstate"
}

variable "access_key" {
  default = "weoghwoep489ug40gu ... "
}
I got this when running terraform init:
terraform.backend: configuration cannot contain interpolations
The backend configuration is loaded by Terraform extremely early,
before the core of Terraform can be initialized. This is necessary
because the backend dictates the behavior of that core. The core is
what handles interpolation processing. Because of this, interpolations
cannot be used in backend configuration.
If you'd like to parameterize backend configuration, we recommend
using partial configuration with the "-backend-config" flag to
"terraform init".
Is there a way of solving this? I really want all my keys/secrets in the same file, and not a secret sitting in main.tf, which I want to be able to push to git.

Terraform doesn't care much about filenames: it just loads all .tf files in the current directory and processes them. Names like main.tf, variables.tf, and outputs.tf are useful conventions to make it easier for developers to navigate the code, but they won't have much impact on Terraform's behavior.
The reason you're seeing the error is that you're trying to use variables in a backend configuration. Unfortunately, Terraform does not allow any interpolation (any ${...}) in backends. Quoting from the documentation:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
So, you have to either hard-code all the values in your backend, or provide a partial configuration and fill in the rest of the configuration via CLI params using an outside tool (e.g., Terragrunt).
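To make the partial-configuration option concrete, here is a sketch using the question's own values (which are illustrative): the backend block in main.tf stays empty, the non-secret settings go into a git-ignored file, and the secret is supplied separately at init time.

```shell
# A sketch of partial configuration. main.tf keeps an empty backend block:
#
#   terraform {
#     backend "azurerm" {}
#   }
#
# The non-secret settings live in a git-ignored file instead:
cat > backend.hcl <<'EOF'
storage_account_name = "myStorageAccount"
container_name       = "tfstate"
key                  = "codelab.microsoft.tfstate"
EOF

# The secret is supplied separately, e.g. as an extra -backend-config
# key=value pair or via the ARM_ACCESS_KEY environment variable:
# terraform init -backend-config=backend.hcl -backend-config="access_key=$ACCESS_KEY"
```

This keeps every committed file free of secrets while the backend still ends up fully configured after `terraform init`.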

There are some important limitations on backend configuration:
A configuration can only provide one backend block.
A backend block cannot refer to named values (like input variables, locals, or data source attributes).
Terraform's backend configuration is documented here:
https://www.terraform.io/docs/configuration/backend.html

Related

How to point in Terraform which Cloudfront to use?

As an example:
I am deploying a Terraform module in us-east-1 that builds the infrastructure plus a CloudFront distribution. The same module will then be deployed in us-west-1 as part of the disaster recovery region. Since CloudFront is a global service, how can I point the Terraform module deployed in us-west-1 at the existing CloudFront distribution?
If these are two modules within the same configuration (i.e. both modules are applied with the same terraform apply command), then you simply pass the aws_cloudfront_distribution resource out of the module where it's created as an output, and pass it into the other module as an input parameter. E.g.:
module1/main.tf
resource "aws_cloudfront_distribution" "mydist" {
  ...
}

output "mydist" {
  value = aws_cloudfront_distribution.mydist
}
module2/main.tf
variable "mydist" {}
main.tf
module "mod1" {
  ...
}

module "mod2" {
  ...
  mydist = module.mod1.mydist
}
And now you can access the CloudFront distribution resource from within module2 by using var.mydist.
If these are two modules in entirely separate Terraform configurations, you can either:
Use the aws_cloudfront_distribution data source to get the details about a distribution that was created in a separate configuration
output the distribution from the configuration where it's created, then use the terraform_remote_state data source to retrieve the output from the remote state file.
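As a sketch of the first option, the aws_cloudfront_distribution data source only needs the distribution's ID; var.distribution_id here is an assumed input variable (e.g. passed in from the us-east-1 deployment), not something from the question.

```hcl
# Look up an existing distribution created by a separate configuration.
data "aws_cloudfront_distribution" "existing" {
  id = var.distribution_id # assumed input variable holding the distribution ID
}

# Attributes such as data.aws_cloudfront_distribution.existing.domain_name
# and .arn are then available to the us-west-1 configuration.
```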

Input variable for terraform provider version

In a CI/CD context, I would like to define provider versions outside my terraform configuration using TF_VAR_ environment variables.
I'm trying to use an input variable to set the version of the helm provider in versions.tf (Terraform 0.12), but it doesn't seem to be allowed:
Error: Invalid provider_requirements syntax
on versions.tf line 3, in terraform:
3: helm = "${var.helm_version}"
provider_requirements entries must be strings or objects.
Error: Variables not allowed
on versions.tf line 3, in terraform:
3: helm = "${var.helm_version}"
Variables may not be used here.
How can I configure this?
If it's not possible, how can I manage the Terraform provider version outside my configuration?
This cannot be done, as much as I wish it could. terraform init resolves and downloads the providers, so you won't have access to variables at that point.
Each terraform block can contain a number of settings related to
Terraform's behavior. Within a terraform block, only constant values
can be used; arguments may not refer to named objects such as
resources, input variables, etc, and may not use any of the Terraform
language built-in functions.
https://www.terraform.io/docs/configuration/terraform.html
As @thekbb says, it's not possible to access a version variable during terraform init, at least as of 0.12.20. However, here is a workaround to manage providers outside your configuration.
You could use an alias in the provider configuration to achieve this. Let's assume you want version 1.3.0 of helm. Rather than passing it as a variable, you could pin it statically with an alias, like below.
provider "helm" {
  alias   = "helm-stable"
  version = "1.3.0" # the version you would otherwise pass via TF_VAR_helm_version

  kubernetes {
    host     = "https://104.196.242.174"
    username = "ClusterMaster"
    password = "MindTheGap"

    client_certificate     = file("~/.kube/client-cert.pem")
    client_key             = file("~/.kube/client-key.pem")
    cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
  }
}
Then, in your resources or data sources, you can point at that particular provider, like below:
data "some_ds" "example" {
  name     = "dummy"
  provider = helm.helm-stable
}
For more details, refer to these links:
providers
allow variable in provider field
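If the version really must come from the environment in a CI/CD pipeline, one common workaround is to render versions.tf before terraform init ever runs. This is a minimal sketch; the variable name reuses the question's TF_VAR_helm_version, though at this point it is just an ordinary environment variable, since Terraform itself never sees it.

```shell
# Generate versions.tf from an environment variable before `terraform init`.
# By the time init reads the file, it contains only a plain string.
export TF_VAR_helm_version="1.3.0"

cat > versions.tf <<EOF
terraform {
  required_providers {
    helm = "${TF_VAR_helm_version}"
  }
}
EOF

# terraform init   # would now pin helm to 1.3.0
```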

terraform interpolation with variables returning error

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, would there be a way to use a variable to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation doesn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable, you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the config at run time interactively, via environment variables, or via command-line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command-line flags that select an appropriate S3 bucket (e.g. a different one for each project and AWS account) and make sure the state file location matches the path to the directory I am working in.
I had the same problems and was disappointed by the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it closes the gap between Terraform and the lack of using variables at some points, e.g. for the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
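For reference, the Terragrunt approach boils down to a remote_state block in terragrunt.hcl. This is a minimal sketch with placeholder bucket/region values, using Terragrunt's built-in path_relative_to_include() helper:

```hcl
# terragrunt.hcl (placeholder values)
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Terragrunt then injects these values via terraform init -backend-config=... for every module that includes this file, which is how it works around the no-interpolation rule.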


How to manage terraform for multiple repos

I have two repos for my project: a static website and a server. I want the website hosted by CloudFront and S3, and the server on Elastic Beanstalk. I know these resources will need to know about at least a Route 53 resource to be under the same domain name for CORS to work, among other things such as VPCs.
So my question is: how do I manage Terraform with multiple repos?
I'm thinking I could have a separate infrastructure repo that builds for all repos.
I could also keep them separate and pass in the ARNs/names/IDs as variables (annoying).
You can use terraform_remote_state for this. It lets you read the output variables from another Terraform state file.
Let's assume you store your state files remotely on S3 and you have a website.tfstate and a server.tfstate file. You could output the hosted zone ID of your Route 53 zone as hosted_zone_id in your website.tfstate and then reference that output variable directly in your server's Terraform code.
data "terraform_remote_state" "website" {
  backend = "s3"

  config {
    bucket = "<website_state_bucket>"
    region = "<website_bucket_region>"
    key    = "website.tfstate"
  }
}

resource "aws_route53_record" "www" {
  zone_id = "${data.terraform_remote_state.website.hosted_zone_id}"
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.lb.public_ip}"]
}
Note, that you can only read output variables from remote states. You cannot access resources directly, as terraform treats other states/modules as black boxes.
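For completeness, the producer side of this (the website configuration) has to publish the value as an output for the consumer to see it; aws_route53_zone.main below is a placeholder for however the zone is actually declared there.

```hcl
# In the website configuration (producer side); the zone resource name
# is illustrative.
output "hosted_zone_id" {
  value = "${aws_route53_zone.main.zone_id}"
}
```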
Update
As mentioned in the comments, terraform_remote_state is a simple way to share explicitly published variables across multiple states. However, it comes with two issues:
Close coupling between code components, i.e., the producer of the variable cannot change easily.
It can only be used by Terraform, i.e., you cannot easily share those variables across different layers. Configuration tools such as Ansible cannot use .tfstate natively without some additional custom plugin/wrapper.
The recommended HashiCorp way is to use a central config store such as Consul. It comes with more benefits:
Consumer is decoupled from the variable producer.
Explicit publishing of variables (like in terraform_remote_state).
Can be used by other tools.
A more detailed explanation can be found here.
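As a sketch of the Consul approach, the consul_keys data source can read a value that another pipeline or tool wrote into the KV store; the key path below is a made-up example.

```hcl
# Read a shared value from Consul's KV store (path is illustrative).
data "consul_keys" "shared" {
  key {
    name = "hosted_zone_id"
    path = "infra/website/hosted_zone_id"
  }
}

# Referenced elsewhere as "${data.consul_keys.shared.var.hosted_zone_id}"
```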
An approach I've used in the past is to have a single repo for all of the Infrastructure.
An alternative is to have 2 separate tf configurations, each using remote state. Config 1 can use output variables to store any arns/ids as necessary.
Config 2 can then have a remote_state data source to query for the relevant arns/ids.
E.g.
# Declare remote state
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
You can then reference output values using standard interpolation syntax:
${data.terraform_remote_state.network.some_id}
