I've just learned some basics about Terraform and created a simple reusable Terraform module with an AWS EC2 instance and some additional resources (security groups, an Elastic IP, SSH key pairs, etc.); it's available at https://github.com/g4s8/docker-worker
Now I want to copy local files to the EC2 instance on create (I guess using a file provisioner). Is it possible to define a files variable like this:
module "worker" {
# ...
files = [
{
src: "./file1",
dst: "/home/ec2-user/file1"
},
{
src: "./file2",
dst: "/home/ec2-user/file2"
}
]
}
and then create dynamic provisioners based on this variable:
resource "aws_instance" "worker" {
# pseudocode:
# for (file : var.files) {
provisioner "file" {
source = file.source
destination = file.destination
}
# }
}
So my question is: is it possible to dynamically generate provisioners in a module, and if yes, how can I implement iterating over all items of files to create them? I didn't find any reference to this in the input variable documentation.
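As far as I know, provisioner blocks can't be generated with dynamic blocks, so one common workaround is to attach one file provisioner per entry through a separate null_resource with for_each. A minimal sketch of that idea, assuming a files variable of src/dst objects and a hypothetical private_key_path variable (none of this is taken from the linked repository):

variable "files" {
  type = list(object({
    src = string
    dst = string
  }))
  default = []
}

# Requires the hashicorp/null provider; one resource instance per file.
resource "null_resource" "copy_files" {
  # for_each needs a map, so key each entry by its destination path
  for_each = { for f in var.files : f.dst => f }

  connection {
    type        = "ssh"
    host        = aws_instance.worker.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path) # hypothetical variable
  }

  provisioner "file" {
    source      = each.value.src
    destination = each.value.dst
  }
}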
I'm looking to automate a specific part of a very complicated Terraform script that I have.
To make it a bit clearer: I have created a TF template that deploys the entire infra into Azure with App Services, a Storage account, Security groups, Windows-based VMs, and Linux-based VMs split between MongoDB and RabbitMQ. Inside my script I was able to automate the deployment to use the name of the application and create a Synthetic Test, and to pick a specific Datadog key based on the environment using a local variable:
keyTouse = lower(var.environment) != "production" ? var.DatadogNPD : var.DatadogPRD
Right now the point that bothers me is the following.
Since we do not need Synthetic tests in non-production environments, I would like to use some sort of logic and not deploy Synthetic tests if var.environment is not "production".
To make this part more interesting, I also have the ability to deploy multiple Synthetic tests using "count" and "length", as shown below.
inside main.tf
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
}
and for the Datadog Synthetic test:
resource "datadog_synthetics_test" "app_service_monitoring" {
  count   = length(var.webapp_name)
  type    = "api"
  subtype = "http"

  request_definition {
    method = "GET"
    url    = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
  }
}
Could you help me and suggest how I can enable or disable module deployment using a variable based on the environment?
Based on my understanding of the question, the change would have to be two-fold:
Add an environment variable to the module code
Use that variable for deciding if the synthetics test resource should be created or not
The above translates to creating another variable in the module and providing it a value when calling the module. The last part is then deciding, based on that value, whether the resource gets created.
# module level variable
variable "environment" {
  type        = string
  description = "Environment in which to deploy resources."
}
Then, in the resource, you would add the following:
resource "datadog_synthetics_test" "app_service_monitoring" {
count = var.environment == "production" ? length(var.webapp_name) : 0
type = "api"
subtype = "http"
request_definition {
method = "GET"
url = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
}
}
And finally, in the root module:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = var.environment
}
The environment = var.environment line will work if you have also defined the environment variable in the root module. If not, you can always set it to the value you want:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = "dev" # <--- or "production" or any other environment you have
}
I have an issue with Terraform and modules that call modules. Whenever I call my nested module the only output I get is:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
For context here is what I'm trying to do:
I have a simple module that creates an EC2 instance with certain options. (This module creates an aws_instance resource.)
I have a second module that calls my first and creates three of the above EC2 instances and sets a unique name for each node.
I have a Terraform project that simply calls the second module.
The two above modules are saved in my private GitLab registry. If I create a Terraform project that calls just the first module, it will create the EC2 instance. If I execute the second module as if it were just a normal project, three EC2 instances are created. My issue is when I call the second module, which then should call the first, but that is where it falls apart.
I'm wondering if this configuration is actually supported. The use case I'm trying to test is standing up a Kubernetes (K8s) cluster. I create a module that defines a compute resource. I then create a second module that configures the compute module with the options needed for a K8s cluster. Finally, I have a project that defines how many K8s nodes are needed and in which region/availability zones.
I call these nested modules, but in all of my searching "nested modules" seems to only refer to modules and sub-modules that live inside the Terraform project that is calling them.
Any help here would be greatly appreciated. Here is how I create the resources and call the modules. These are simple examples I'm using for testing and not anything like the K8s modules I'll be creating. I'm just trying to figure out if I can get nested private registry modules working.
First Module (compute)
resource "aws_instance" "poc-instance" {
ami = var.ol8_ami
key_name = var.key_name
monitoring = true
vpc_security_group_ids = [
data.aws_security_group.internal-ssh-only.id,
]
root_block_device {
delete_on_termination = true
encrypted = true
volume_type = var.ebs_volume_type
volume_size = var.ebs_volume_size
}
availability_zone = var.availability_zone
subnet_id = var.subnet_id
instance_type = var.instance_type
tags = var.tags
metadata_options {
http_tokens = "required"
http_endpoint = "disabled"
}
}
Second Module (nested-cluster)
module "ec2_instance" {
source = "gitlab.example.com/terraform/compute/aws"
for_each = {
0 = "01"
1 = "02"
2 = "03"
}
tags = {
"Name" = "poc-nested_ec2${each.value}"
"service" = "terraform test"
}
}
Terraform Project
module "sample_cluster" {
source = "gitlab.example.com/terraform/nested-cluster/aws"
}
We want to deploy services into several regions.
It looks like, because of the AWS provider, we can't just use count or for_each, as the provider can't be interpolated. Thus I need to set this up manually:
resource "aws_instance" "app-us-west-1" {
provider = aws.us-west-1
#other stuff
}
resource "aws_instance" "app-us-east-1" {
provider = aws.us-east-1
#other stuff
}
I would like, when running this, to create a file which contains all the IPs created (for an Ansible inventory).
I was looking at this answer:
https://stackoverflow.com/a/61788089/169252
and trying to adapt it for my case:
resource "local_file" "app-hosts" {
content = templatefile("${path.module}/templates/app_hosts.tpl",
{
hosts = aws_instance[*].public_ip
}
)
filename = "app-hosts.cfg"
}
And then setting up the template accordingly.
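For illustration only (the question does not include it), the app_hosts.tpl template might look something like this, assuming it receives a list named hosts and should render one IP per line:

%{ for ip in hosts ~}
${ip}
%{ endfor ~}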
But the local_file resource above fails:
Error: Invalid reference
on app.tf line 144, in resource "local_file" "app-hosts":
122: hosts = aws_instance[*].public_ip
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name
I suspect that I can't just reference all the aws_instance resources defined above like this. Maybe to refer to all aws_instance resources in this file I need to use a different syntax.
Or maybe I need to use a module somehow. Can someone confirm this?
Using terraform v0.12.24
EDIT: The provider definitions use alias and it's all in the same app.tf, which I was naively assuming I could apply in one go with terraform apply (did I mention I am a beginner with Terraform?):
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
My current workaround is to not do a join but simply list them all individually:
{
  host1 = aws_instance.app-us-west-1.public_ip
  host2 = aws_instance.app-us-east-1.public_ip
  # more hosts
}
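For completeness, a minimal sketch of what the explicit-list version of the local_file resource could look like (simply naming both instances, since a bare aws_instance[*] reference is not valid), assuming the for-loop style template sketched above:

resource "local_file" "app-hosts" {
  content = templatefile("${path.module}/templates/app_hosts.tpl",
    {
      hosts = [
        aws_instance.app-us-west-1.public_ip,
        aws_instance.app-us-east-1.public_ip,
      ]
    }
  )
  filename = "app-hosts.cfg"
}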
I am trying to use a 2-repo IaC setup, with the so-called back-end being in the form of terragrunt modules and the front-end (or live) repo containing the instantiation of those modules, which are filled in with variables.
The image below depicts the structure of those 2 repos (terragrunt being the back-end and terraform-live the live one, as the name implies).
In my terragrunt/aws-vpc/variables.tf, there is the following declaration:
variable "remote_state_bucket" {
description = "The bucket containing the terraform remote state"
}
However, when trying to perform a terragrunt apply in the live directory, I get the following:
var.remote_state_bucket
The bucket containing the terraform remote state
Enter a value:
Here is my terraform-live/environments/staging/terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket = "my-bucket-staging"
    key    = "terraform/state/var.env_name/${path_relative_to_include()}"
    region = "eu-west-1"
  }
}

# Configure root level variables that all resources can inherit
terraform {
  extra_arguments "extra_args" {
    commands = "${get_terraform_commands_that_need_vars()}"

    optional_var_files = [
      "${get_terragrunt_dir()}/${find_in_parent_folders("config.tfvars", "ignore")}",
      "${get_terragrunt_dir()}/${find_in_parent_folders("secrets.auto.tfvars", "ignore")}",
    ]
  }
}
What is more, the variable seems to be declared in one of the files that terragrunt is instructed to read variables from:
➢ cat terraform-live/environments/staging/config.tfvars
remote_state_bucket = "pkaramol-staging"
Why is terragrunt (or terraform?) unable to read the specific variable?
➢ terragrunt --version
terragrunt version v0.19.29
➢ terraform --version
Terraform v0.12.4
Because config.tfvars is not in a parent folder :)
find_in_parent_folders looks in parent folders, but not in the current folder. And your config.tfvars is in the same folder as your terragrunt.hcl.
Try using something like:
optional_var_files = [
  "${get_terragrunt_dir()}/config.tfvars",
  "${get_terragrunt_dir()}/secrets.auto.tfvars",
]
I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization and you want to build out the same basic networks. Or you want an easy way for a developer to spin up the stack they're working on. The only difference between "terraform apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using the same Terraform configuration folder that you apply and simply pass in a variable when running Terraform (either via the command line or through environment variables). You'd also want a wrapper script that calls Terraform and configures your state settings so that they differ per business unit.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
Which creates an EC2 instance and an RDS instance. You would then call that with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
  -backend-config="bucket=${business_unit}" \
  -backend-config="key=state"

terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage state configuration as before, but going this route enables you to provide a rough template in your modules and then hard-code certain extra configuration per business unit, such as the instance size or the number of instances that are built for them.
This is a rather popular use case. To achieve this you can let developers pass a variable from the command line or from a tfvars file into the resource to make different resources unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We do this often to provision infrastructure for a specific git branch name, and since all resources are identifiable and located in separate tfstate files, we can safely destroy them when we don't need them.