How to iterate resources in Terraform?

How to iterate resources for different values of a parameter
For example, in my Terraform file below I have one data block and one resource. If I pass the value DB_NAME=test it works fine. But what if I have multiple values of DB_NAME and want it to run multiple times, e.g. DB_NAME=test, app? How do I iterate over the data and resource blocks?
data "template_file" "search-index" {
template = "${file("search-index/search-index.sh")}"
vars {
DB_NAME = "${var.DB_NAME}"
}
}
resource "null_resource" "script" {
triggers = {
DB_NAME = "${var.DB_NAME}"
script_sha = "${sha256(file("search-index/search-index.sh"))}"
}
provisioner "local-exec" {
command = "${data.template_file.search-index.rendered}"
interpreter = ["/bin/bash", "-c"]
}
}

I'm not sure if I fully understood what you are trying to achieve, but you can create multiple databases by defining a variables.tf like this:
variable "DB_NAME" {
description = "A list of databases"
type = list(string)
default = ["db1", "db2", "db3"]
}
Then use the for_each functionality in your Terraform file:
data "template_file" "search-index" {
for_each = toset(var.DB_NAME)
template = "${file("search-index/search-index.sh")}"
vars = {
DB_NAME = each.value
}
}
resource "null_resource" "script" {
for_each = toset(var.DB_NAME)
triggers = {
DB_NAME = each.value
script_sha = "${sha256(file("search-index/search-index.sh"))}"
}
provisioner "local-exec" {
command = "${data.template_file.search-index[each.value]}"
interpreter = ["/bin/bash", "-c"]
}
}
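On newer Terraform versions you can also drop the template_file data source entirely and render the script with the built-in templatefile() function. A minimal sketch, assuming the same search-index/search-index.sh script and the DB_NAME list variable defined above:

resource "null_resource" "script" {
  for_each = toset(var.DB_NAME)

  triggers = {
    DB_NAME    = each.value
    script_sha = sha256(file("search-index/search-index.sh"))
  }

  provisioner "local-exec" {
    # templatefile() renders the script once per database name
    command     = templatefile("search-index/search-index.sh", { DB_NAME = each.value })
    interpreter = ["/bin/bash", "-c"]
  }
}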

Related

What to do if instance creation and cloud-config happen in separate sessions in Terraform?

I was able to manually create this device instance in OpenStack, and now I am trying to reproduce it with Terraform.
The instance needs a hard reboot after the volume attachment, and any cloud-config needs to be done after rebooting. Here is the general sketch of my current main.tf file.
# Configure the OpenStack Provider
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

data "template_file" "user_data" {
  template = file("./userdata.yaml")
}

# create an instance
resource "openstack_compute_instance_v2" "server" {
  name         = "Device_Instance"
  image_id     = "xxx..."
  image_name   = "device_vmdk_1"
  flavor_name  = "m1.tiny"
  key_pair     = "my-keypair"
  region       = "RegionOne"
  config_drive = true

  network {
    name = "main_network"
  }
}

resource "openstack_blockstorage_volume_v2" "volume_2" {
  name     = "device_vmdk_2"
  size     = 1
  image_id = "xxx...."
}

resource "openstack_blockstorage_volume_v3" "volume_3" {
  name     = "device_vmdk_3"
  size     = 1
  image_id = "xxx..."
}

resource "openstack_compute_volume_attach_v2" "va_1" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.volume_2.id}"
}

resource "openstack_compute_volume_attach_v2" "va_2" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v3.volume_3.id}"
}

resource "null_resource" "reboot_instance" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
openstack server reboot --hard Device_Instance --insecure
echo "................"
EOT
  }

  depends_on = [openstack_compute_volume_attach_v2.va_1, openstack_compute_volume_attach_v2.va_2]
}

resource "openstack_compute_instance_v2" "server_config" {
  name       = "Device_Instance"
  user_data  = data.template_file.user_data.rendered
  depends_on = [null_resource.reboot_instance]
}
As of now, it is able to:
have the "Device-Cloud-Instance" generated;
have the "Device-Cloud-Instance" hard-rebooted.
But it fails after rebooting. As you may see, I have added this section at the end, but it does not seem to work:
resource "openstack_compute_instance_v2" "server_config" {}
Any ideas how to make it work?
Thanks,
Jack

How to add value(s) to an object from Terraform jsondecode without copying values over to another variable?

I have the following example of Terraform resources where I fetch values from Secrets Manager and pass them to a Lambda function. The question is: how can I add extra values to the object before passing it to the environment variables, without replicating the existing values?
resource "aws_secretsmanager_secret" "example" {
name = "example"
}
resource "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
secret_string = <<EOF
{
"FOO": "bar"
}
EOF
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
depends_on = [aws_secretsmanager_secret_version.example]
}
locals {
original_secrets = jsondecode(
data.aws_secretsmanager_secret_version.example.secret_string
)
}
resource "aws_lambda_function" "example" {
...
environment {
variables = local.original_secrets
}
}
As pseudo-code, I'd like to do something like this:
local.original_secrets["LOG_LEVEL"] = "debug"
The current approach I have is to just replicate the original values and add the new one, but of course this is not DRY.
locals {
  ...

  updated_secrets = {
    FOO   = try(local.original_secrets.FOO, "")
    DEBUG = "false"
  }
}
You can use the Terraform merge function to produce a new, combined map of environment variables:
lambda_environment_variables = merge(local.lambda_secrets, local.environment_variables)
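Applied to the question's own locals, a minimal sketch might look like this (LOG_LEVEL is just the illustrative extra key from the pseudo-code above, and updated_secrets is a hypothetical name):

locals {
  original_secrets = jsondecode(
    data.aws_secretsmanager_secret_version.example.secret_string
  )

  # merge() returns a new map; keys from the second argument win on conflicts
  updated_secrets = merge(local.original_secrets, {
    LOG_LEVEL = "debug"
  })
}

Then reference local.updated_secrets in the Lambda function's environment block instead of local.original_secrets.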

Retrieve IP address from instances using for_each

I have this script, which works great. It creates 3 instances with the specified tags so they are easy to identify. The issue is that I want to add a remote-exec provisioner (currently commented out) to install some packages. If I were using count, I could have looped over it to run remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get it working using host = "${self.public_ip}", but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
expanded_names = flatten([
for name, count in var.host_name : [
for i in range(count) : format("%s-%02d", name, i + 1)
]
])
}
provider.tf
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "2.3.1"
    }
  }
}

provider "vultr" {
  api_key     = "***************************"
  rate_limit  = 700
  retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
The attribute you are looking for is called main_ip, not ipv4_address or something similar. Specifically, it is accessible via self.main_ip in your connection block.
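A minimal sketch of how the commented-out block could look with that attribute, assuming the same resource, key file, and locals as in instance.tf above:

resource "vultr_instance" "kubernetes_instance" {
  for_each = toset(local.expanded_names)
  # ... plan, region, os_id and the other arguments as above ...

  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("kubernetes")
    timeout     = "2m"
    # self refers to the instance currently being provisioned
    host        = self.main_ip
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname ${each.key}"]
  }
}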

Terraform - multi-line JSON to single line?

I've created a JSON string via template/interpolation. I need to pass that to local-exec, which in turn uses a PowerShell template to make a CLI call.
Originally I tried just referencing the JSON template in the PowerShell command itself:
--cli-input-json file://lfsetup.tpl
However, the template does not get interpolated.
Next, I tried setting the JSON to a local, but that is multi-line and the CLI does not like it. Maybe I could convert it to a single line?
Any suggestions or guidance welcome! Thanks.
JSON (.tpl or variable)
{
  "CatalogId": "${account_id}",
  "DataLakeSettings": {
    "DataLakeAdmins": [
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
      },
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
      }
    ],
    "CreateDatabaseDefaultPermissions": [],
    "CreateTableDefaultPermissions": []
  }
}
.tf
locals {
  assume_role_arn  = "arn:aws:iam::${local.account_id}:role/role_to_assume"
  lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
  cli_region       = "region"
}

resource "null_resource" "settings" {
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
.ps
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
Output:
For these attempts I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and with replacing '\n'. Strange results in some cases.
Use the file provisioner to create a .json file from the rendered .tpl file:
locals {
  ...
  settings_json_file = "/tmp/lfsetup.json"
}

resource "null_resource" "settings" {
  provisioner "file" {
    content     = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
    destination = local.settings_json_file
  }

  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
Update your .ps file by replacing file://lfsetup.tpl with file://${json_settings}:
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function.
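For example, a minimal sketch that collapses the rendered multi-line template into a single-line JSON string, assuming the lf_json_settings local from above (lf_json_settings_oneline is a hypothetical name):

locals {
  # jsondecode() parses the rendered template; jsonencode() re-serializes it
  # as a compact, single-line JSON string
  lf_json_settings_oneline = jsonencode(jsondecode(local.lf_json_settings))
}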

Using multiple user_data files in Terraform

I am trying to have a common user_data file for common tasks, such as folder creation and certain package installs, and a separate user_data file for application-specific configuration.
I am trying the below:
user_data = "${data.template_file.userdata_common.rendered}", "${data.template_file.userdata_master.rendered}"
With these configs:
Common User Data Template
data "template_file" "userdata_common" {
template = "${file("${path.module}/userdata_common.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
Application Specific Config
data "template_file" "userdata_master" {
template = "${file("${path.module}/userdata_master.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
I get the error below when I run plan:
Failed to load root config module: Error parsing /terraform/main.tf: key ${data.template_file.userdata_common.rendered}"' expected start of object ('{') or assignment ('=')
Is this possible using Terraform (0.9.3)?
If not, what's the best way to do this with Terraform?
Did you try template_cloudinit_config? Add the code below:
data "template_cloudinit_config" "master" {
gzip = true
base64_encode = true
# get common user_data
part {
filename = "common.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_common.rendered}"
}
# get master user_data
part {
filename = "master.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_master.rendered}"
}
}
# sample code to use it.
resource "aws_instance" "web" {
ami = "ami-d05e75b8"
instance_type = "t2.micro"
user_data = "${data.template_cloudinit_config.master.rendered}"
}
Let me know if it works.
You can use "provisioner" to modify the infrastructure you are creating using the Terraform, Here is the example from them https://www.terraform.io/intro/getting-started/provision.html
