Here is my setup:
Terraform version - Terraform v0.12.17
OS - OSX 10.15.1
Use Case - define a provider file and access the variables defined in the vars file
Files
main.tf - where the code is
provider "aws" {
}
variable "AWS_REGION" {
type = string
}
variable "AMIS" {
type = map(string)
default = {
us-west-1 = "my ami"
}
}
resource "aws_instance" "awsInstall" {
ami = var.AMIS[var.AWS_REGION]
instance_type = "t2.micro"
}
awsVars.tfvars - where the region is defined
AWS_REGION="eu-region-1"
Execution
$ terraform console
var.AWS_REGION
Error: Result depends on values that cannot be determined until after "terraform apply".
What mistake have I made? I don't see any syntax errors, but I have issues accessing the variables. Any pointers would be helpful.
Thanks
Terraform does not automatically read a .tfvars file unless it is named terraform.tfvars or its filename ends with .auto.tfvars. Because of that, when you ran terraform console with no arguments, Terraform did not know a value for the variable AWS_REGION.
To keep your existing filename, you can pass this variables file explicitly on the command line like this:
terraform console -var-file="awsVars.tfvars"
Alternatively, you could rename the file to awsVars.auto.tfvars and then Terraform will read it by default as long as it's in the current working directory when you run Terraform.
There's more information on how you can set values for root module input variables in the Terraform documentation section Assigning Values to Root Module Variables.
Note also that the usual naming convention for input variables and other Terraform-specific objects is to keep the names in lowercase and separate words with underscores. For example, it would be more conventional to name your variables aws_region and amis.
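For example, the AWS_REGION declaration above could be renamed like this, with the resource reference then becoming var.amis[var.aws_region]:
variable "aws_region" {
  type = string
}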
Furthermore, if your goal is to find an AMI for the current region (the one chosen by the AWS_DEFAULT_REGION environment variable, or in the provider configuration), you could use the aws_region data source to allow Terraform to determine that automatically, so you don't have to set it as a variable at all:
variable "amis" {
type = map(string)
default = {
us-west-1 = "my ami"
}
}
data "aws_region" "current" {}
resource "aws_instance" "awsInstall" {
ami = var.amis[data.aws_region.current.name]
instance_type = "t2.micro"
}
Related
We want to deploy services into several regions.
It looks like we can't just use count or for_each here, because the aws provider can't be interpolated. Thus I need to set this up manually:
resource "aws_instance" "app-us-west-1" {
provider = aws.us-west-1
#other stuff
}
resource "aws_instance" "app-us-east-1" {
provider = aws.us-east-1
#other stuff
}
When running this, I would like to create a file which contains all the created IPs (for an Ansible inventory).
I was looking at this answer:
https://stackoverflow.com/a/61788089/169252
and trying to adapt it for my case:
resource "local_file" "app-hosts" {
content = templatefile("${path.module}/templates/app_hosts.tpl",
{
hosts = aws_instance[*].public_ip
}
)
filename = "app-hosts.cfg"
}
And then setting up the template accordingly.
But this fails:
Error: Invalid reference
on app.tf line 144, in resource "local_file" "app-hosts":
122: hosts = aws_instance[*].public_ip
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name
I suspect that I can't just reference all the aws_instance resources defined above like this. Maybe I need a different syntax to refer to all the aws_instance resources in this file.
Or maybe I need to use a module somehow. Can someone confirm this?
Using terraform v0.12.24
EDIT: The provider definitions use alias and everything is in the same app.tf, which I naively assumed I could apply in one go with terraform apply (did I mention I am a beginner with Terraform?):
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
My current workaround is to not do a join but simply list them all individually:
{
  host1 = aws_instance.app-us-west-1.public_ip
  host2 = aws_instance.app-us-east-1.public_ip
  # more hosts
}
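The same explicit listing can also go straight into the templatefile call as a list; a sketch along the lines of my workaround (I haven't verified it end to end):
resource "local_file" "app-hosts" {
  content = templatefile("${path.module}/templates/app_hosts.tpl",
    {
      # enumerate the instances explicitly; a splat across a bare resource type is not valid
      hosts = [
        aws_instance.app-us-west-1.public_ip,
        aws_instance.app-us-east-1.public_ip,
      ]
    }
  )
  filename = "app-hosts.cfg"
}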
I have the two environment Terraform variable files below:
devenv_variables.tfvars
testenv_variables.tfvars
Here is devenv_variables.tfvars
location = "westeurope"
resource_group_name = "devenv-cloudresources-rg"
Here is testenv_variables.tfvars
location = "westeurope"
resource_group_name = "testenv-cloudresources-rg"
Here is my main.tf
# Configure the Microsoft Azure Provider
provider "azurerm" {
version = "=2.0.0"
features {}
subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
provider "azurerm" {
alias = "testenv"
subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
# Create a resource group
resource "azurerm_resource_group" {
provider = "azurerm.testenv" //How do I pass provider based on variables here?
name = "${var.resource_group_name}"
location = "${var.location}"
}
My requirement is that the subscription should be chosen based on which tfvars file is passed as a parameter. When I run the command below, resources should be created in the dev environment:
terraform apply -var-file="devenv_variables.tfvars"
and when I run the command below, resources should be created in the test environment:
terraform apply -var-file="testenv_variables.tfvars"
I think I need to define a client ID and password to log in to the respective subscriptions.
tfvars files should only contain the values of variables.
The declaration of variables should happen in regular tf files.
variables.tf
variable "location" {
type = "string"
description = "The azure location where the resources is created"
}
devenv_variables.tfvars
location = "West Europe"
This tutorial can also help you with some more information and examples.
All you have to do is use the same variable names in both files. When you run terraform plan -var-file='<file_name>' and terraform apply -var-file='<file_name>', Terraform will pick up different values from the different files.
For example:
variable.tf (contain all variables)
variable "resource_group" {
type = string
description = "Resource Group"
default = ""
}
dev.tfvars (contains the development values)
resource_group = "name_of_my_resource_group_dev"
prod.tfvars (contains the production values)
resource_group = "name_of_my_resource_group_prod"
Now when you run terraform plan -var-file='dev.tfvars', it takes the values from the dev.tfvars file.
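If the goal is also to select the subscription per environment, as in the question, the same idea can be extended by making the subscription ID a variable supplied from each tfvars file. A sketch (the subscription ID values would be placeholders per environment):
variable "subscription_id" {
  type = string
}

provider "azurerm" {
  version         = "=2.0.0"
  features {}
  subscription_id = var.subscription_id # set in devenv_variables.tfvars / testenv_variables.tfvars
}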
I have created the following terraform.tfvars file:
ec2_image = "ami-00035f41c82244dab"
ec2_instance_type = "t2.micro"
And use it as follows in a main.tf file:
resource "aws_instance" "OneServer" {
ami = "${var.ec2_image}"
instance_type = "${var.ec2_instance_type}"
}
Then I execute the 'terraform plan' command and it complains with:
Error: resource 'aws_instance.OneServer' config: unknown variable
referenced: 'ec2_image'; define it with a 'variable' block
So I changed the main.tf file as follows:
variable "ec2_image" {}
variable "ec2_instance_type" {}
resource "aws_instance" "OneServer" {
ami = "${var.ec2_image}"
instance_type = "${var.ec2_instance_type}"
}
Then the command 'terraform plan' works OK.
I don't understand why these variable blocks are required. What's the point of them?
Are you actually using the -var-file command-line switch?
I don't use that switch myself; I just declare my overridable variables in a separate .tf file (in my case, variable.tf). Either way, a variable block is what declares a variable to Terraform; a .tfvars file such as terraform.tfvars only supplies values for variables that have already been declared.
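So for the example above, the declarations could live in a variable.tf along these lines (the types, description, and default shown here are optional refinements, not required):
variable "ec2_image" {
  type        = "string"
  description = "AMI ID for the instance"
}

variable "ec2_instance_type" {
  type    = "string"
  default = "t2.micro"
}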
I would like to use the same terraform template for several dev and production environments.
My approach:
As I understand it, the resource name needs to be unique, and Terraform stores the state of the resource internally. I therefore tried to use variables for the resource names, but it seems this is not supported. I get an error message:
$ terraform plan
var.env1
Enter a value: abc
Error asking for user input: Error parsing address 'aws_sqs_queue.SqsIntegrationOrderIn${var.env1}': invalid resource address "aws_sqs_queue.SqsIntegrationOrderIn${var.env1}"
My terraform template:
variable "env1" {}
provider "aws" {
region = "ap-southeast-2"
}
resource "aws_sqs_queue" "SqsIntegrationOrderIn${var.env1}" {
name = "Integration_Order_In__${var.env1}"
message_retention_seconds = 86400
receive_wait_time_seconds = 5
}
I think either my approach or my syntax is wrong. Any ideas?
You can't interpolate inside the resource name. Instead, as #BMW mentioned in the comments, you should make a Terraform module that contains the SqsIntegrationOrderIn resource and takes an env variable. Then you can use the module twice, and the two instances simply won't clash. You can also have a look at a similar question I answered.
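A minimal sketch of that layout (the module path and the env values here are just illustrative):
# modules/sqs_queue/main.tf (the reusable queue definition)
variable "env" {}

resource "aws_sqs_queue" "SqsIntegrationOrderIn" {
  name                      = "Integration_Order_In__${var.env}"
  message_retention_seconds = 86400
  receive_wait_time_seconds = 5
}

# root configuration: instantiate the module once per environment
module "sqs_dev" {
  source = "./modules/sqs_queue"
  env    = "dev"
}

module "sqs_prod" {
  source = "./modules/sqs_queue"
  env    = "prod"
}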
I recommend using a different workspace for each environment. This allows you to specify your configuration like this:
variable "env1" {}
provider "aws" {
region = "ap-southeast-2"
}
resource "aws_sqs_queue" "SqsIntegrationOrderIn" {
name = "Integration_Order_In__${var.env1}"
message_retention_seconds = 86400
receive_wait_time_seconds = 5
}
Make sure the queue name depends on the environment (e.g. by including it in the name attribute, as above) to avoid name conflicts in AWS.
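With workspaces you can also derive the environment from the built-in terraform.workspace value instead of prompting for a variable; a sketch:
resource "aws_sqs_queue" "SqsIntegrationOrderIn" {
  # terraform.workspace is "default", "dev", "prod", etc., depending on the selected workspace
  name                      = "Integration_Order_In__${terraform.workspace}"
  message_retention_seconds = 86400
  receive_wait_time_seconds = 5
}
Workspaces are created and selected with terraform workspace new dev and terraform workspace select dev, and each workspace keeps its own state.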
We are trying to create Terraform modules for the activities below in AWS, so that we can reuse them wherever they are required.
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules, we have to define the provider in each of the modules listed above. So we decided to create one more module for the provider, so that we can call that provider module from the other modules (VPC, subnet, etc.).
The issue with this approach is that it does not pick up the provider value and instead asks for user input for the region.
Terraform configuration is as follow:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it does not pick up the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the concerns below:
Wrap the provider in a Terraform module
Handle the multiple-region use case with a provider module, or in some other way.
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it was not previously possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
and run terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like a potential bug in Terraform's graph-building stage that should probably be raised as an issue, but I don't know the core code base well enough to spot where the change would need to be.
For our usage, we use symlinks heavily in our Terraform code base. Some of that is historic, from before Terraform supported other ways of doing things, but it could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed for every place you would ever run Terraform (i.e. not at the module level), and then symlink this file into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
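As an illustration, the kind of file we symlink looks roughly like this (the file name and the default region are just examples):
# environment.tf (symlinked into every root configuration)
provider "aws" {
  region = "${var.region}"
}

variable "region" {
  default = "ap-south-1"
}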
Step 1
Add region aliases in the main.tf file where you are going to execute terraform plan.
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: Terraform version v1.0.5 as of now.