Access Process-Specific Terraform TF_VAR value - terraform

I am attempting to grab a value within a Terraform/Python null_resource and make it available to other Terraform processes like so:
print_env.py
import os
# Create TF_VAR key value
param_tfvar = "TF_VAR_helloworld"
# Publish value through TF_VAR value
os.environ[param_tfvar] = "Hey Bob!"
print(param_tfvar, os.getenv(param_tfvar))
The Terraform *.tf files look like this:
variables.tf
variable "helloworld" {
description = "Display greeting"
type = string
default = ""
}
main.tf
resource "null_resource" "helloworld" {
provisioner "local-exec" {
command = <<EOC
python3 print_env.py
EOC
}
}
resource "null_resource" "echo_helloworld" {
provisioner "local-exec" {
command = "echo ${var.helloworld}"
}
depends_on = [
null_resource.helloworld
]
}
My issue is that the echo command does not display Hey Bob! but just a blank line (the default value of helloworld defined in the variables.tf file).
I was looking at https://www.terraform.io/language/values/variables#environment-variables for guidance on how to craft this solution.
Am I running into some type of scoping issue where the TF_VAR value published in the null_resource.helloworld block is not visible to the null_resource.echo_helloworld block?

Environment variables must be exported in the shell before you invoke terraform. Setting them at run time like you are doing will not work: the Python script only modifies its own process environment, and var.helloworld has already been evaluated by the time the provisioner runs.
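For example, in a bash-like shell (a minimal sketch using the variable name from your variables.tf):
export TF_VAR_helloworld="Hey Bob!"
terraform apply
Terraform then picks the value up as var.helloworld in both null_resource blocks.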
Here's how I would solve your problem:
main.tf:
data "external" "get_greeting" {
program = ["python3", "get_env.py", var.greeting]
}
variable "greeting" {
description = "Display greeting"
type = string
default = ""
}
output "show_greeting" {
value = data.external.get_greeting.result.greeting
}
get_env.py:
import json
import sys
# only use the argument if it was provided and is non-empty
if len(sys.argv) > 1 and sys.argv[1]:
greeting = f"Hello {sys.argv[1]}"
else:
greeting = "Hello"
tf_dict = {"greeting": greeting}
print(json.dumps(tf_dict))
And this is what you would see if you invoked it with no variable override:
$ terraform apply
outputs:
show_greeting = "Hello"
and with override:
$ terraform apply -var greeting="world"
outputs:
show_greeting = "Hello world"

Related

In Terraform how to use a condition to only run on certain nodes?

Terraform v1.2.8
I have a generic script that executes the passed-in shell script on my AWS remote EC2 instance that I've created also in Terraform.
resource "null_resource" "generic_script" {
connection {
type = "ssh"
user = "ubuntu"
private_key = file(var.ssh_key_file)
host = var.ec2_pub_ip
}
provisioner "file" {
source = "../modules/k8s_installer/${var.shell_script}"
destination = "/tmp/${var.shell_script}"
}
provisioner "remote-exec" {
inline = [
"sudo chmod u+x /tmp/${var.shell_script}",
"sudo /tmp/${var.shell_script}"
]
}
}
Now I want to be able to modify it so it runs on:
all nodes
this node but not that node
that node but not this node
So I created variables in the variables.tf file
variable "run_on_THIS_node" {
type = bool
description = "Run script on THIS node"
default = false
}
variable "run_on_THAT_node" {
type = bool
description = "Run script on THAT node"
default = false
}
How can I put a condition to achieve what I want to do?
resource "null_resource" "generic_script" {
count = ???
...
}
You could use the ternary operator for this. For example, based on the defined variables, the condition would look like:
resource "null_resource" "generic_script" {
count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes) # or var.number_of_nodes
...
}
The piece of the puzzle that is missing is a variable (or a number) that tells the script to run on all the nodes. It does not have to use the length function; you could define it as a plain number. However, this is only part of the code you would have to add or edit, because there also has to be a way to choose the connection host based on the index, which means you would probably have to turn var.ec2_pub_ip into a list, as sketched below.
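As a rough sketch of that follow-up change (var.this_node_ip, var.that_node_ip and var.all_node_ips are hypothetical names you would have to define and wire up yourself):
variable "all_node_ips" {
  type        = list(string)
  description = "Public IPs of every node the script may run on"
  default     = []
}

resource "null_resource" "generic_script" {
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_node_ips)

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    # pick the single requested node, otherwise fan out over the whole list
    host = var.run_on_THIS_node ? var.this_node_ip : (var.run_on_THAT_node ? var.that_node_ip : var.all_node_ips[count.index])
  }
  ...
}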

How to add value(s) to an object from Terraform jsondecode without copying values over to another variable?

I have the following example of Terraform resources where I fetch values from Secrets Manager and pass them to a Lambda function. The question is: how can I add extra values to the object before passing it to the environment variables, without replicating the values?
resource "aws_secretsmanager_secret" "example" {
name = "example"
}
resource "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
secret_string = <<EOF
{
"FOO": "bar"
}
EOF
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
depends_on = [aws_secretsmanager_secret_version.example]
}
locals {
original_secrets = jsondecode(
data.aws_secretsmanager_secret_version.example.secret_string
)
}
resource "aws_lambda_function" "example" {
...
environment {
variables = local.original_secrets
}
}
As pseudocode, I'd like to do something like this:
local.original_secrets["LOG_LEVEL"] = "debug"
The current approach I have is just to replicate the original values and add a new one, but of course this is not DRY.
locals {
...
updated_secrets = {
FOO = try(local.original_secrets.FOO, "")
DEBUG = "false"
}
}
You can use the Terraform merge function to produce a new, combined map of environment variables.
lambda_environment_variables = merge(local.lambda_secrets, local.environment_variables)
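Applied to the locals from the question, a minimal sketch could look like this (LOG_LEVEL is just an illustrative extra key):
locals {
  original_secrets = jsondecode(
    data.aws_secretsmanager_secret_version.example.secret_string
  )
  # keys from the later map win on conflict, so the extras override the secret values
  updated_secrets = merge(local.original_secrets, { LOG_LEVEL = "debug" })
}

resource "aws_lambda_function" "example" {
  ...
  environment {
    variables = local.updated_secrets
  }
}
merge accepts any number of maps, and later arguments take precedence when keys collide.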

Terraform, using yamldecode() in my locals returns the wrong string

I am creating Event Grid topics by looping over a local variable that is imported from a YAML file through the yamldecode function.
locals {
app_name = yamldecode(file("config.yaml"))["name"]
version = yamldecode(file("config.yaml"))["version"]
functions = yamldecode(file("config.yaml"))["functions"]
}
resource "azurerm_eventgrid_topic" "function" {
count = length(local.functions)
name = "topic-${local.functions[count.index]["name"]}"
resource_group_name = azurerm_resource_group.core.name
location = azurerm_resource_group.core.location
depends_on = [
null_resource.functions
]
}
The returned error shows that the name is wrong, which makes me assume that it is somehow not getting the input from the local variables. What am I doing wrong?
error message:
│ Error: invalid value for name (EventGrid topic name must be 3 - 50 characters long, contain only letters, numbers and hyphens.)
│
│ with azurerm_eventgrid_topic.function[2],
│ on main.tf line 66, in resource "azurerm_eventgrid_topic" "function":
│ 66: resource "azurerm_eventgrid_topic" "function" {
This is what the yaml file looks like.
Any idea? I appreciate your help.
I tested your code and it works; you can verify it yourself by running the snippet below:
locals {
app_name = yamldecode(file("config.yaml"))["name"]
version = yamldecode(file("config.yaml"))["version"]
functions = yamldecode(file("config.yaml"))["functions"]
}
resource "null_resource" "test" {
count = length(local.functions)
provisioner "local-exec" {
command = "echo topic-${local.functions[count.index]["name"]} >> test.out"
}
triggers = {
build_number = timestamp()
}
}
output "null" {
value = null_resource.test
}
So the issue is with the Event Grid topic name. I looked at the Microsoft documentation (https://learn.microsoft.com/en-us/azure/event-grid/troubleshoot-errors) and there are limitations on topic names: they cannot contain underscores, although they can contain hyphens, and I noticed you are using underscores in the name.
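If that is the cause, one option (a sketch, assuming you want to keep the underscores in config.yaml and only sanitize the topic name) is to strip them with the replace function:
resource "azurerm_eventgrid_topic" "function" {
  count               = length(local.functions)
  # swap underscores for hyphens so the name satisfies Event Grid naming rules
  name                = "topic-${replace(local.functions[count.index]["name"], "_", "-")}"
  resource_group_name = azurerm_resource_group.core.name
  location            = azurerm_resource_group.core.location
}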

Terraform - multi-line JSON to single line?

I've created a JSON string via template/interpolation.
I need to pass that to local-exec, which in turn uses a Powershell template to make a CLI call.
Originally I tried just referencing the json template in the Powershell command itself
--cli-input-json file://lfsetup.tpl
.. however, the template does not get interpolated.
Next, I tried setting the JSON to a local. However, it is multi-line and the CLI does not like that. Maybe I could convert it to a single line?
Any suggestions or guidance welcome!
Thanks
JSON (.tpl or variable)
{
"CatalogId": "${account_id}",
"DataLakeSettings": {
"DataLakeAdmins": [
{
"DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
},
{
"DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
}
],
"CreateDatabaseDefaultPermissions": [],
"CreateTableDefaultPermissions": []
}
}
.tf
locals {
assume_role_arn = "arn:aws:iam::${local.account_id}:role/role_to_assume"
lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id})
cli_region = "region"
}
resource "null_resource" "settings" {
provisioner "local-exec" {
command = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region})
interpreter = ["pwsh", "-Command"]
}
}
.ps
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
Output:
For these I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and with replacing '\n'. Strange results in some cases.
Use file provisioner to create .json file from rendered .tpl file:
locals {
...
settings_json_file = "/tmp/lfsetup.json"
}
resource "null_resource" "settings" {
provisioner "file" {
content = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id})
destination = local.settings_json_file
}
provisioner "local-exec" {
command = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region})
interpreter = ["pwsh", "-Command"]
}
}
Update your .ps file: replace file://lfsetup.tpl with file://${json_settings}:
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function, as sketched below.
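A minimal sketch of that route (assuming the rendered template is valid JSON) is to round-trip the rendered text through jsondecode and jsonencode, which emits compact single-line JSON you can pass straight to the CLI:
locals {
  # parse the rendered template, then re-encode it as compact single-line JSON
  lf_json_settings = jsonencode(jsondecode(templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })))
}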

Retrieve the value of a provisioner command?

This is different from "Capture Terraform provisioner output?". I have a resource (a null_resource in this case) with a count and a local-exec provisioner that has some complex interpolated arguments:
resource "null_resource" "complex-provisioning" {
count = "${var.count}"
triggers {
server_triggers = "${null_resource.api-setup.*.id[count.index]}"
db_triggers = "${var.db_id}"
}
provisioner "local-exec" {
command = <<EOF
${var.init_command}
do-lots-of-stuff --target=${aws_instance.api.*.private_ip[count.index]} --bastion=${aws_instance.bastion.public_ip} --db=${var.db_name}
EOF
}
}
I want to be able to show what the provisioner did as output (this is not valid Terraform, just a mock-up of what I want):
output "provisioner_commands" {
value = {
api_commands = "${null_resource.complex-provisioning.*.provisioner.0.command}"
}
}
My goal is to get some output like
provisioner_commands = {
api_commands = [
"do-lots-of-stuff --target=10.0.0.1 --bastion=77.2.4.34 --db=mydb.local",
"do-lots-of-stuff --target=10.0.0.2 --bastion=77.2.4.34 --db=mydb.local",
"do-lots-of-stuff --target=10.0.0.3 --bastion=77.2.4.34 --db=mydb.local",
]
}
Can I read provisioner configuration and output it like this? If not, is there a different way to get what I want? (If I didn't need to run over an array of resources, I would define the command in a local variable and reference it both in the provisioner and the output.)
You cannot grab the interpolated command from the local-exec provisioner block, but if you put the same interpolation into a trigger, you can retrieve it in the output with a for expression in Terraform 0.12.x:
resource "null_resource" "complex-provisioning" {
count = 2
triggers = {
command = "echo ${count.index}"
}
provisioner "local-exec" {
command = self.triggers.command
}
}
output "data" {
value = [
for trigger in null_resource.complex-provisioning.*.triggers:
trigger.command
]
}
$ terraform apply
null_resource.complex-provisioning[0]: Refreshing state... [id=9105930607760919878]
null_resource.complex-provisioning[1]: Refreshing state... [id=405391095459979423]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
data = [
"echo 0",
"echo 1",
]
