Terraform using variables and other resources in modules

I am refactoring my Terraform code to use modules, but I am getting a lot of "variable/resource not defined/not found" errors.
I found that I need to move my variable [name] {} blocks into the module. Is it impossible for modules to reference a parent's or another module's variables? E.g. sometimes I may have some reused variables, such as NODE_ENV.
Next, after this, I find that it says "missing required argument". I am just running terraform init because Terraform says I need to do it. I tried adding -var-file, but it does not seem to work for modules? How should I resolve this?
There are also a lot of
resource 'aws_ecs_service.xxx-ecs-service' config: unknown module referenced: ecs
errors. It appears I cannot reference resources the usual way anymore?
# ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
family = "${var.family}"
container_definitions = "${data.template_file.container_defination.rendered}"
task_role_arn = "${var.role_arn}"
execution_role_arn = "${var.role_arn}"
network_mode = "awsvpc"
cpu = "${var.cpu}"
memory = "${var.memory}"
requires_compatibilities = ["FARGATE"]
tags = "${var.tags}"
}
resource "aws_ecs_service" "xxx-ecs-service" {
name = "${var.service_name}"
cluster = "${var.ecs_cluster}"
task_definition = "${module.ecs.aws_ecs_task_definition.pinfare-ecs-task.arn}"
}
For the task_definition, I tried adding module.ecs since ecs is the name of my module:
module "ecs" {
source = "./ecs"
name = "ecs"
}

To avoid confusion, I would recommend complete directory separation.
module/ecs/ecs.tf
env/dev/myecs.tf
The below is for illustration purposes and has not been tested by myself.
Configurations
module/ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
family = "${var.family}"
container_definitions = "${data.template_file.container_defination.rendered}"
task_role_arn = "${var.role_arn}"
execution_role_arn = "${var.role_arn}"
network_mode = "awsvpc"
cpu = "${var.cpu}"
memory = "${var.memory}"
requires_compatibilities = ["FARGATE"]
tags = "${var.tags}"
}
resource "aws_ecs_service" "xxx-ecs-service" {
name = "${var.service_name}"
cluster = "${var.ecs_cluster}"
task_definition = "${aws_ecs_task_definition.xxx-ecs-task.arn}"
}
module/ecs/variables.tf
Define all the variables to be passed from the module user side (env/dev/myecs.tf).
variable "family" {
}
variable "role_arn" {
}
...
env/dev/myecs.tf
module "ecs" {
source = "../../module/ecs"
family = "value of family here"
role_arn = "value of IAM role arn"
...
}
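Regarding reused variables such as NODE_ENV: values supplied with -var-file only populate the root module's variable declarations; they are never pushed into child modules implicitly. Declare the variable in both the root module and the child module, then wire it through the module call. A minimal sketch, assuming an illustrative node_env variable:
# env/dev/variables.tf -- receives the value from terraform apply -var-file=dev.tfvars
variable "node_env" {
}

# module/ecs/variables.tf -- declared again on the module side
variable "node_env" {
}

# env/dev/myecs.tf -- pass the root value down explicitly
module "ecs" {
  source   = "../../module/ecs"
  node_env = "${var.node_env}"
}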
Module path
Be aware of ${path.module}, as described in Paths and Embedded Files, when specifying a path to a file within a module. Confusingly, both env/dev/ and module/ecs are modules in Terraform, where env/dev/ is the root module.
Creating Modules
Modules in Terraform are folders with Terraform files. In fact, when you run terraform apply, the current working directory holding the Terraform files you're applying comprise what is called the root module. This itself is a valid module.
When you specify a path, it is interpreted relative to the root module. Therefore, within a called module (module/ecs), prefix paths with ${path.module}/ so that the called module will not try to load files inside env/dev.
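For example, a sketch of loading the container definition template from within the called module (the file path under the module is illustrative):
# module/ecs/ecs.tf -- resolved relative to this module, not env/dev
data "template_file" "container_defination" {
  template = "${file("${path.module}/files/container-definition.json.tpl")}"
}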
Terraform Registry
I would recommend having a look at the Terraform Registry, for instance the Security Group module, to get used to how Terraform modules are used. Links to GitHub are there too.

Related

Terraform-AWS-Modules-using subnet-id of a VPC child module in another ec2 child module

I will start by thanking you for your time. I have been googling and reading, and maybe I am just overlooking something very simple. I have tried my best with some articles on Medium and the Terraform documentation.
So, my problem is..
I have a root module that looks like this
module "VPC" {
source = "/home/jamie/Terraform_Project/modules/vpc/"
}
module "Key_Pair" {
source = "/home/jamie/Terraform_Project/modules/key_pair/"
}
module "EC2_VPN" {
source = "/home/jamie/Terraform_Project/modules/ec2_vpn/"
}
and three child modules, as you can see. I cannot reference the "Public_Subnet_ID" from my VPC module in my EC2 module. I will show my main.tfs and my output.tfs below. I think it is worth mentioning that I have tried various things I have found on Google and don't seem to get anywhere; below is my latest attempt. I have seen other answers on Stack Overflow, but they have not worked for me, or I am still doing something wrong.
VPC - main.tf (will show subnet bit only)
/* Public Subnet */
resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
tags = {
Name = "Public"
Project = var.project
Architect = var.architect
}
}
VPC - output.tf
output "public_subnet_id" {
value = aws_subnet.public_subnet.id
}
EC2 - main.tf (problem bit)
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = var.public_subnet_id
key_name = "${var.project}_Key"
tags = {
Name = "VPN_Server"
Project = var.project
Architect = var.architect
}
}
I also tried the above with a variable (maybe wrongly, from another thread/guide).
My first errors were that "module.1_VPC.Public_Subnet_id" wasn't referenced; I managed to get past that bit, but now it just ends up with
Error: creating EC2 Instance: InvalidSubnetID.NotFound: The subnet ID 'module.1_VPC.Public_Subnet_id' does not exist
│ status code: 400, request id: 00fa3944-4ea3-450b-9fd4-39645785269f
│
│ with module.EC2_VPN.aws_instance.web,
│ on .terraform/modules/EC2_VPN/main.tf line 17, in resource "aws_instance" "web":
│ 17: resource "aws_instance" "web" {
Again, thank you for taking the time. I am learning and trying to build/learn as I go, not just copy and paste other templates.
I tried various guides / Terraform docs (most reference modules in the same file, not in separate folders).
I just need to be able to export a resource_id for use in another child module. Once I can do this, I will be able to duplicate it for security groups and anything else I need to reference.
DISCLAIMER: Module development is very dynamic and sometimes subjective. Below are some points to start with and to support your query.
There seem to be many things to be addressed in your modules, but I will try to share the minimal info you need to fix and initialize this.
1. In your call to the VPC child module you usually do not need the subnet_id argument.
module "VPC" {
source = "/home/jamie/Terraform_Project/modules/1_VPC/"
# subnet_id = "module.VPC.Public_Subnet_id"
}
Because you are creating the subnet itself in the child module, it is not required.
Only required inputs need to be defined here; in simple words, variables whose default values are not set, or which you want to override, need to be supplied in the root module while calling the child module.
Outputs for subnet_id and any other attributes that make sense to be used by other modules should be defined.
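A minimal sketch of such an output, assuming it is named subnet_id to match step 3 below:
# modules/vpc/output.tf
output "subnet_id" {
  value = aws_subnet.public_subnet.id
}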
2. Parameterize the EC2 child module
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = var.subnet_id
key_name = "${var.project}_Key"
tags = {
Name = "VPN_Server"
Project = "${var.project}"
Architect = "${var.architect}"
}
}
You can/should parameterize all the inputs which are dynamic, just like in any module development, so that they can be overridden/set in the root module if required.
One side note: "${var.project}" is not mandatory; var.project is totally fine with recent Terraform versions.
In this case, I have updated subnet_id = var.subnet_id
3. Adapt the EC2 module call in the root module to the new changes.
module "EC2_VPN" {
source = "/home/jamie/Terraform_Project/modules/3_EC2_VPN/"
subnet_id = module.VPC.subnet_id
}
As mentioned in point 2, you need a variable defined in your EC2 child module, and an output for subnet_id in your VPC child module with the value you need to pass here; see the sketch below.
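A minimal sketch of that variable (the description is illustrative):
# modules/ec2_vpn/variables.tf
variable "subnet_id" {
  description = "ID of the subnet the instance is launched into"
  type        = string
}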
Helpful documentation
Creating Terraform Modules
Reuse Configuration with Modules Tutorials
Build and Use local Module

Trying to use nested Terraform modules from a private registry

I have an issue with Terraform and modules that call modules. Whenever I call my nested module the only output I get is:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
For context here is what I'm trying to do:
I have a simple module that creates an EC2 instance with certain options. (This module creates an aws_instance resource.)
I have a second module that calls my first and creates three of the above EC2 instances and sets a unique name for each node.
I have a Terraform project that simply calls the second module.
The two above modules are saved in my private GitLab registry. If I create a Terraform project that calls just the first module, it will create the EC2 instance. If I execute the second module as if it were just a normal project, three EC2 instances are created. My issue is when I call the second module, which then should call the first; that is where it falls apart.
I'm wondering if this configuration is actually supported. The use case I'm trying to test is standing up a Kubernetes (K8s) cluster. I create a module that defines a compute resource. I then create a second module that calls the compute module with the options needed for a K8s cluster. Finally, I have a project that defines how many K8s nodes are needed and in which region/availability zones.
I call this nested modules, but in all of my searching nested modules seems to only refer to modules and sub-modules that live inside of the Terraform project that is calling them.
Any help here would be greatly appreciated. Here is how I create the resources and call the modules. These are simple examples I'm using for testing and not anything like the K8s modules I'll be creating. I'm just trying to figure out if I can get nested private registry modules working.
First Module (compute)
resource "aws_instance" "poc-instance" {
ami = var.ol8_ami
key_name = var.key_name
monitoring = true
vpc_security_group_ids = [
data.aws_security_group.internal-ssh-only.id,
]
root_block_device {
delete_on_termination = true
encrypted = true
volume_type = var.ebs_volume_type
volume_size = var.ebs_volume_size
}
availability_zone = var.availability_zone
subnet_id = var.subnet_id
instance_type = var.instance_type
tags = var.tags
metadata_options {
http_tokens = "required"
http_endpoint = "disabled"
}
}
Second Module (nested-cluster)
module "ec2_instance" {
source = "gitlab.example.com/terraform/compute/aws"
for_each = {
0 = "01"
1 = "02"
2 = "03"
}
tags = {
"Name" = "poc-nested_ec2${each.value}"
"service" = "terraform test"
}
}
Terraform Project
module "sample_cluster" {
source = "gitlab.example.com/terraform/nested-cluster/aws"
}

Terraform depends_on with modules

I'm new at Terraform and I created custom Azure policies in a module structure.
Each policy is represented by a custom module.
One of the modules that I have created enables diagnostics logs for any new Azure resource created.
But I need a storage account for that (before enabling the diagnostics settings). How can I implement depends_on, or is there any other method?
I want to create the storage account first and then the diagnostics settings module.
Where does this go: in the main.tf (where all the other modules are called) or inside the resource (module)?
Thanks for the help!! :)
The below code represents the main.tf file:
//calling the create storage account name
module "createstorageaccount" {
source = "./modules/module_create_storage_account"
depends_on = [
"module_enable_diagnostics_logs"
]
}
This one represents the create storage account module:
resource "azurerm_resource_group" "management" {
name = "management-rg"
location = "West Europe"
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
}
depends_on = [
"module_enable_diagnostics_logs"
]
In most cases, the necessary dependencies just occur automatically as a result of your references. If the configuration for one resource refers directly or indirectly to another, Terraform automatically infers the dependency between them without the need for explicit depends_on.
This works because module variables and outputs are also nodes in the dependency graph: if a child module resource refers to var.foo then it indirectly depends on anything that the value of that variable depends on.
For the rare situation where automatic dependency detection is insufficient, you can still exploit the fact that module variables and outputs are nodes in the dependency graph to create indirect explicit dependencies, like this:
variable "storage_account_depends_on" {
# the value doesn't matter; we're just using this variable
# to propagate dependencies.
type = any
default = []
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
# This resource depends on whatever the variable
# depends on, indirectly. This is the same
# as using var.storage_account_depends_on in
# an expression above, but for situations where
# we don't actually need the value.
depends_on = [var.storage_account_depends_on]
}
When you call this module, you can set storage_account_depends_on to any expression that includes the objects you want to ensure are created before the storage account:
module "diagnostic_logs" {
source = "./modules/diagnostic_logs"
}
module "storage_account" {
source = "./modules/storage_account"
storage_account_depends_on = [module.diagnostic_logs.logging]
}
Then in your diagnostic_logs module you can configure indirect dependencies for the logging output to complete the dependency links between the modules:
output "logging" {
# Again, the value is not important because we're just
# using this for its dependencies.
value = {}
# Anything that refers to this output must wait until
# the actions for azurerm_monitor_diagnostic_setting.example
# to have completed first.
depends_on = [azurerm_monitor_diagnostic_setting.example]
}
If your relationships can be expressed by passing actual values around, such as by having an output that includes the id, I'd recommend preferring that approach because it leads to a configuration that is easier to follow. But in rare situations where there are relationships between resources that cannot be modeled as data flow, you can use outputs and variables to propagate explicit dependencies between modules too.
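A sketch of that value-passing approach, assuming the diagnostic settings consume the storage account's id (the module names, output name, and storage_account_id input are illustrative):
# modules/storage_account/outputs.tf
output "storage_account_id" {
  value = azurerm_storage_account.test.id
}

# root module: the reference itself creates the ordering,
# no depends_on needed
module "storage_account" {
  source = "./modules/storage_account"
}

module "diagnostic_logs" {
  source             = "./modules/diagnostic_logs"
  storage_account_id = module.storage_account.storage_account_id
}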
Module dependencies are now supported in Terraform 0.13, which is currently at the release candidate stage:
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Using depends_on at the resource level is different from using depends_on at the inter-module level. I found a very simple way to do it at the module level:
module "eks" {
source = "../modules/eks"
vpc_id = module.vpc.vpc_id
vpc_cidr = [module.vpc.vpc_cidr_block]
public_subnets = flatten([module.vpc.public_subnets])
private_subnets_id = flatten([module.vpc.private_subnets])
depends_on = [module.vpc]
}
I created the dependency directly on the module; as simple as it gets, no complex relation required.

How to create provider modules that support multiple AWS regions?

We are trying to create Terraform modules for the below activities in AWS, so that we can use them wherever required.
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all the above-listed modules. So we decided to create one more module for the provider, so that we can call that provider module in the other modules (VPC, Subnet, etc.).
The issue with the above approach is that it is not taking the provider value and is asking for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it is not taking the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the below concerns:
Wrap provider in a Terraform module
Handle multiple region use case using provider module or any other way.
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it used not to be possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And running terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like it might be a potential bug in the graph building stage in Terraform that should probably be raised as an issue but I don't know the core code base well enough to spot where the change needs to be.
For our usage we use symlinks heavily in our Terraform code base, some of which is historic from before Terraform supported other ways of doing things but could work for you here.
We simply define the provider in a single .tf file (such as environment.tf) along with any other generic config needed for every place you would ever run Terraform (ie not at a module level) and then symlink this into each location. That allows us to define the provider in a single place with overridable variables if necessary.
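For illustration, the shared file might look like this (the file name, variable, and default are assumptions), symlinked into each directory where Terraform is run:
# environment.tf -- defined once, symlinked everywhere
variable "region" {
  default = "ap-south-1"
}

provider "aws" {
  region = "${var.region}"
}
$ ln -s ../../environment.tf environment.tf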
Step 1
Add region aliases in the main.tf file where you are going to execute terraform plan.
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block.
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: Terraform version v1.0.5 as of now.

Terraform use case to create multiple almost identical copies of infrastructure

I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization and you want to build out the same basic networks. Or you want an easy way for a developer to spin up the stack they're working on. The only difference between "terraform apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files ?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using the same Terraform configuration folder that you apply and simply pass in a variable when running Terraform (either via the command line or through environment variables). You'd also want to have the same wrapper script that calls Terraform to configure your state settings to make them differ.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
Which creates an EC2 instance and an RDS instance. You would then call that with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
  -backend-config="bucket=${business_unit}" \
  -backend-config="key=state"
terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage state configuration as before, but going this route enables you to provide a rough template in your modules and then hard code certain extra configuration per business unit, such as the instance size or the number of instances built for them.
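For example, since the web-instance module already exposes an ami variable with a default, business-unit-2 could pin its own image while reusing the same module (the AMI ID here is a placeholder):
module "web_instance" {
  source        = "../web-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
  ami           = "ami-654321"
}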
This is a rather popular use case. To achieve this, you can let developers pass a variable from the command line or from a tfvars file into the resource to make the different resources unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We do this often to provision infrastructure for a specific git branch name, and since all resources are identifiable and are located in separate tfstate files, we can safely destroy them when we don't need them.
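A sketch of that flow, assuming a recent Terraform where workspaces give each invocation its own state file (the branch name is illustrative):
$ terraform workspace new feature-xyz
$ terraform apply -var 'BUSINESS_UNIT=feature-xyz'
$ terraform destroy -var 'BUSINESS_UNIT=feature-xyz'
$ terraform workspace select default && terraform workspace delete feature-xyz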
