Terraform use case to create multiple almost identical copies of infrastructure

I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization and you want to build out the same basic networks for each, or you want an easy way for a developer to spin up the stack they're working on. The only difference between "terraform apply" invocations is a variable such as BUSINESS_UNIT, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files?

You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
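Inside the module, this variable can then be used to parameterize resources. As a minimal sketch (the aws_vpc resource, the CIDR, and the tag name here are hypothetical, not part of the original module), /terraform/modules/common-infra/main.tf might contain:
resource "aws_vpc" "main" {
  # Hypothetical example: give each business unit's VPC a distinct name
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "${var.business_unit_name}-vpc"
  }
}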
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
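On Terraform 0.13 or later, this repetition can also be collapsed with for_each on the module block. A sketch, assuming the same module and input variable as above:
module "business-unit" {
  for_each = toset(["a", "b", "c"])

  source             = "/terraform/modules/common-infra"
  business_unit_name = each.key
}
Each copy then gets its own entry in state, addressable as module.business-unit["a"] and so on.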
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.

There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using the same Terraform configuration folder that you apply, simply passing in a variable when running Terraform (either via the command line or through environment variables). You'd also want a wrapper script that calls Terraform and configures your state settings so that each business unit's state is kept separate.
You might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
This creates an EC2 instance and an RDS instance. You would then call it with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
                        -backend-config="bucket=${business_unit}" \
                        -backend-config="key=state"
terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
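Note that terraform remote config was removed in Terraform 0.9 in favor of backends. A rough modern equivalent (a sketch, assuming the same bucket-per-unit layout) is to declare a partial backend configuration and supply the per-unit values at init time:
terraform {
  backend "s3" {
    # bucket and key are left unset here and supplied per business unit:
    #   terraform init -backend-config="bucket=${business_unit}" -backend-config="key=state"
    region = "us-east-1" # hypothetical region
  }
}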
As an alternative, you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that looks like this:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage the state configuration as before, but going this route enables you to provide a rough template in your modules and then hard-code extra configuration per business unit, such as the instance size or the number of instances built for them.
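For example, if the web-instance module also exposed an instance_type variable (hypothetical here, not shown above), one business unit could override it while the others keep the default:
business-unit-1/main.tf
module "web_instance" {
  source        = "../web-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
  instance_type = "t2.large" # hypothetical per-unit override
}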

This is a rather popular use case. To achieve it, you can let developers pass a variable from the command line or from a tfvars file into the resource, making each set of resources unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
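If you prefer the tfvars route, the same variable can live in a per-unit file instead (a sketch; the file name is illustrative):
# unit_name.tfvars
BUSINESS_UNIT = "unit_name"
which you would apply with terraform apply -var-file=unit_name.tfvars.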
PS: We often do this to provision infrastructure for a specific git branch name, and since all resources are identifiable and live in separate tfstate files, we can safely destroy them when we no longer need them.

Related

Trying to use nested Terraform modules from a private registry

I have an issue with Terraform and modules that call modules. Whenever I call my nested module the only output I get is:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
For context here is what I'm trying to do:
I have a simple module that creates an EC2 instance with certain options. (This module creates an aws_instance resource.)
I have a second module that calls my first and creates three of the above EC2 instances and sets a unique name for each node.
I have a Terraform project that simply calls the second module.
The two above modules are saved in my private GitLab registry. If I create a Terraform project that calls just the first module, it will create the EC2 instance. If I execute the second module as if it were just a normal project, three EC2 instances are created. My issue is when I call the second module, which then should call the first, but that is where it falls apart.
I'm wondering if this configuration is actually supported. The use case I'm trying to test is standing up a Kubernetes (K8s) cluster. I create a module that defines a compute resource. I then create a second module that defines the compute modules with the options needed for a K8s cluster. Finally, I have a project that defines how many K8s nodes are needed and in which regions/availability zones.
I call this "nested modules", but in all of my searching, "nested modules" seems to refer only to modules and sub-modules that live inside the Terraform project that calls them.
Any help here would be greatly appreciated. Here is how I create the resources and call the modules. These are simple examples I'm using for testing and not anything like the K8s modules I'll be creating. I'm just trying to figure out if I can get nested private registry modules working.
First Module (compute)
resource "aws_instance" "poc-instance" {
ami = var.ol8_ami
key_name = var.key_name
monitoring = true
vpc_security_group_ids = [
data.aws_security_group.internal-ssh-only.id,
]
root_block_device {
delete_on_termination = true
encrypted = true
volume_type = var.ebs_volume_type
volume_size = var.ebs_volume_size
}
availability_zone = var.availability_zone
subnet_id = var.subnet_id
instance_type = var.instance_type
tags = var.tags
metadata_options {
http_tokens = "required"
http_endpoint = "disabled"
}
}
Second Module (nested-cluster)
module "ec2_instance" {
source = "gitlab.example.com/terraform/compute/aws"
for_each = {
0 = "01"
1 = "02"
2 = "03"
}
tags = {
"Name" = "poc-nested_ec2${each.value}"
"service" = "terraform test"
}
}
Terraform Project
module "sample_cluster" {
source = "gitlab.example.com/terraform/nested-cluster/aws"
}

Terraform Remote State - use variables which are not present during plan/apply

I have a question regarding Terraform remote states. Maybe I am using the remote state wrong, or there is another possible solution:
In my scripts I create a DB instance. The created endpoint, port, etc. should be saved in the remote state. The DB scripts are factored out into a module.
I want to reuse the endpoint and port values and pass them to the Docker container environment:
environment = [
  {
    name: "SPRING_DATASOURCE_URL",
    value: "jdbc:postgresql://${data.terraform_remote_state.foo.outputs.db_endpoint}:${data.terraform_remote_state.foo.outputs.db_port}"
  }
]
These scripts are also factored out into a separate module.
On each first run, Terraform states that these values are not present. Therefore I have to comment out the environment values, run terraform apply, and after everything has been created, rerun terraform apply, now with the values for the environment.
Is there another (and better) solution to pass the created DB values to the service and task which contain the Docker container environment?
Edit 21-05-2021
As suggested by @smsnheck, if a module block is used, one can reference output variables via module.module_name.output_name. E.g.:
module/main.tf
resource "aws_db_instance" "example" {
// Arguments taken from example
allocated_storage = 10
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
output "db_host" {
value = aws_db_instance.example.address
}
output "db_port" {
value = aws_db_instance.example.port
}
main.tf
module "db" {
source = "./module"
}
// ...
resource "some_docker_provider" "example" {
// ...
environment = [
{
name: "SPRING_DATASOURCE_URL",
value: "jdbc:postgresql://${module.db.db_host}:${module.db.db_port}"
}
]
// ...
}
Old answer
I presume that you are creating the database from the Terraform scripts. If so, you should not use a data source here, because data sources are read before any resource is created. So what's happening is that you are trying to get the DB host and port before the database is created.
Assuming you are creating the DB instance with aws_db_instance, you should reference the port and host like this:
resource "aws_db_instance" "example" {
// Arguments taken from example
allocated_storage = 10
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
// ...
// Locals are not required, you may use aws_db_instance.example. values directly
locals {
db_host = aws_db_instance.example.address
db_port = aws_db_instance.example.port
}
// ...
resource "some_docker_provider" "example" {
// ...
environment = [
{
name: "SPRING_DATASOURCE_URL",
value: "jdbc:postgresql://${local.db_host}:${local.db_port}"
}
]
// ...
}
This way, Terraform will know that the DB instance must be created first, because .address and .port are values that are only known after the DB instance is created (the DB instance will now be a dependency of the Docker container).
To get more information about the values that are returned after creation of the resource, refer to the Attributes reference in the provider's documentation. For instance, here is the reference for aws_db_instance: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#attributes-reference

Terraform output defined are empty [duplicate]

I'm trying to set up some IaC for a new project using HashiCorp Terraform on AWS. I'm using modules because I want to be able to reuse stuff across multiple environments (staging, prod, dev, etc.).
I'm struggling to understand where I have to set an output variable within a module, and how I then use that in another module. Any pointers to this would be greatly appreciated!
I need to use some things created in my VPC module (subnet IDs) when creating EC2 machines. My understanding is that you can't reference something from one module in another, so I am trying to use an output variable from the VPC module.
I have the following in my site main.tf
module "myapp-vpc" {
source = "dev/vpc"
aws_region = "${var.aws_region}"
}
module "myapp-ec2" {
source = "dev/ec2"
aws_region = "${var.aws_region}"
subnet_id = "${module.vpc.subnetid"}
}
dev/vpc simply sets some values and uses my vpc module:
module "vpc" {
source = "../../modules/vpc"
aws_region = "${var.aws_region}"
vpc-cidr = "10.1.0.0/16"
public-subnet-cidr = "10.1.1.0/24"
private-subnet-cidr = "10.1.2.0/24"
}
In my vpc main.tf, I have the following at the very end, after the aws_vpc and aws_subnet resources (showing subnet resource):
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.main.id}"
map_public_ip_on_launch = true
availability_zone = "${var.aws_region}a"
cidr_block = "${var.public-subnet-cidr}"
}
output "subnetid" {
value = "${aws_subnet.public.id}"
}
When I run terraform plan I get the following error message:
Error: module 'vpc': "subnetid" is not a valid output for module "vpc"
Outputs need to be passed up through each module explicitly, every time.
For example, if you wanted to output a variable to the screen from a module nested below another module, you would need something like this:
child-module.tf
output "child_foo" {
  value = "foobar"
}
parent-module.tf
module "child" {
  source = "path/to/child"
}

output "parent_foo" {
  value = "${module.child.child_foo}"
}
main.tf
module "parent" {
  source = "path/to/parent"
}

output "main_foo" {
  value = "${module.parent.parent_foo}"
}
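Applied to the question above, the dev/vpc wrapper would need to re-export the nested module's output before the site-level main.tf can see it. A sketch using the question's own names (note the site main.tf must also reference the module by the name it was given, myapp-vpc):
# in dev/vpc, pass the nested module's output up one level
output "subnetid" {
  value = "${module.vpc.subnetid}"
}

# in the site main.tf
module "myapp-ec2" {
  source     = "dev/ec2"
  aws_region = "${var.aws_region}"
  subnet_id  = "${module.myapp-vpc.subnetid}"
}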

Terraform depends_on with modules

I'm new to Terraform and I created custom Azure policies in a module structure;
each policy represents a custom module.
One of the modules I have created enables diagnostic logs for any newly created Azure resource,
but I need a storage account for that (before enabling the diagnostic settings). How can I implement "depends_on", or is there another method?
I want to create the storage account first and then the diagnostics settings module.
Should this go in the main.tf (where all the other modules are called) or inside the resource (module)?
Thanks for the help!! :)
this below code represents the main.tf file:
//calling the create storage account name
module "createstorageaccount" {
  source = "./modules/module_create_storage_account"

  depends_on = [
    "module_enable_diagnostics_logs"
  ]
}
this one represents the create storage account module
resource "azurerm_resource_group" "management" {
name = "management-rg"
location = "West Europe"
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
}
depends_on = [
"module_enable_diagnostics_logs"
]
In most cases, the necessary dependencies just occur automatically as a result of your references. If the configuration for one resource refers directly or indirectly to another, Terraform automatically infers the dependency between them without the need for explicit depends_on.
This works because module variables and outputs are also nodes in the dependency graph: if a child module resource refers to var.foo then it indirectly depends on anything that the value of that variable depends on.
For the rare situation where automatic dependency detection is insufficient, you can still exploit the fact that module variables and outputs are nodes in the dependency graph to create indirect explicit dependencies, like this:
variable "storage_account_depends_on" {
# the value doesn't matter; we're just using this variable
# to propagate dependencies.
type = any
default = []
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
# This resource depends on whatever the variable
# depends on, indirectly. This is the same
# as using var.storage_account_depends_on in
# an expression above, but for situations where
# we don't actually need the value.
depends_on = [var.storage_account_depends_on]
}
When you call this module, you can set storage_account_depends_on to any expression that includes the objects you want to ensure are created before the storage account:
module "diagnostic_logs" {
source = "./modules/diagnostic_logs"
}
module "storage_account" {
source = "./modules/storage_account"
storage_account_depends_on = [module.diagnostic_logs.logging]
}
Then in your diagnostic_logs module you can configure indirect dependencies for the logging output to complete the dependency links between the modules:
output "logging" {
# Again, the value is not important because we're just
# using this for its dependencies.
value = {}
# Anything that refers to this output must wait until
# the actions for azurerm_monitor_diagnostic_setting.example
# to have completed first.
depends_on = [azurerm_monitor_diagnostic_setting.example]
}
If your relationships can be expressed by passing actual values around, such as by having an output that includes the id, I'd recommend preferring that approach because it leads to a configuration that is easier to follow. But in rare situations where there are relationships between resources that cannot be modeled as data flow, you can use outputs and variables to propagate explicit dependencies between modules too.
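For example, instead of an empty placeholder output, the diagnostics module could export the id of the setting it creates; referring to that value carries the same dependency information while also being useful data (a sketch based on the resource names above):
output "diagnostic_setting_id" {
  value = azurerm_monitor_diagnostic_setting.example.id
}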
Module dependencies are now supported in Terraform 0.13, which is currently at the release candidate stage.
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Using depends_on at the resource level is different from using depends_on at the inter-module level. I found a very simple way to do it at the module level:
module "eks" {
source = "../modules/eks"
vpc_id = module.vpc.vpc_id
vpc_cidr = [module.vpc.vpc_cidr_block]
public_subnets = flatten([module.vpc.public_subnets])
private_subnets_id = flatten([module.vpc.private_subnets])
depends_on = [module.vpc]
}
I created the dependency directly on the module: as simple as it gets, no complex relations required.

How to create provider modules that support multiple AWS regions?

We are trying to create Terraform modules for the below activities in AWS, so that we can use them wherever required:
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the above modules. So we decided to create one more module for the provider, so that we can call that provider module from the other modules (VPC, subnet, etc.).
The issue with the above approach is that it does not pick up the provider value and asks for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it does not pick up the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the below concerns:
Wrap provider in a Terraform module
Handle multiple region use case using provider module or any other way.
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it didn't use to be possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And running terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like it might be a potential bug in Terraform's graph building stage that should probably be raised as an issue, but I don't know the core code base well enough to spot where the change would need to be made.
For our usage we rely heavily on symlinks in our Terraform code base (some of this is historic, from before Terraform supported other ways of doing things), but it could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed in every place you would ever run Terraform (i.e. not at the module level), and then symlink this into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
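For example, the shared file might contain just the provider plus the variables it needs (a sketch; the file name and default are illustrative), symlinked into each environment folder:
# environment.tf - symlinked into every folder where Terraform is run
provider "aws" {
  region = "${var.region}"
}

variable "region" {
  default = "eu-west-1" # illustrative default; override per environment
}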
Step 1
Add a region alias in the main.tf file where you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Declare "aws" in required_providers inside your module (source = "../../../terraform_modules/lambda"):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: Terraform version v1.0.5 as of now.
