How to make Terraform error when data is empty

I have a problem with Terraform, specifically with the Google load balancer backend_service.
Here's my sample tf config:
variable "backends" {
  default = [
    {
      name    = "red"
      service = "abcd"
    },
    {
      name    = "blue"
      service = "efgh"
    }
  ]
}

locals {
  backend = { for i in var.backends : "${i.name}-${i.service}" => i }
}

data "external" "mydata" {
  for_each = local.backend
  program  = ["/bin/sh", "script.sh", each.value.name]
}

# The script returns data like this:
# {"id": "sometext"}

resource "google_compute_backend_service" "default" {
  project  = local.project_id
  for_each = local.backend
  name     = each.key

  dynamic "backend" {
    for_each = { for zone in var.zone : zone => "projects/${local.project_id}/zones/${zone}/networkEndpointGroups/${data.external.mydata[each.key].result.id}" }
    content {
      group = backend.value
    }
  }
}
The problem is that if data.external.mydata[each.key].result.id is empty, terraform plan still succeeds without any error. I want terraform plan to fail, specifically in google_compute_backend_service.
Here's the sample external data when it is empty:
{"id":""}
Is it possible to force an error here? I know count can be used to check the external data value, but since I use for_each, I cannot use count.

Part of the expected protocol for programs you execute with the external data source is that they can indicate an error by printing an error message to stderr and then exiting with a non-zero exit status.
In a shell script you could do that via some commands like the following:
echo >&2 "No matching object is available."
exit 1
If your command exits unsuccessfully like this then the external provider will detect that and return a failure, which will then in turn prevent Terraform from processing any resources that depend on that result.
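For illustration, the empty-result case from the question could be caught inside the script itself. In this sketch, emit_result is a hypothetical helper standing in for the tail end of script.sh; the actual lookup that produces the id is elided:

```shell
#!/bin/sh
# Hypothetical tail end of script.sh: print the JSON result only when the
# id is non-empty; otherwise follow the external data source protocol
# (error message on stderr, non-zero exit/return status).
emit_result() {
  id="$1"
  if [ -z "$id" ]; then
    echo >&2 "No matching object is available."
    return 1
  fi
  printf '{"id":"%s"}\n' "$id"
}
```

With a check like this, the script never emits {"id":""}; it fails instead, and terraform plan fails with it before google_compute_backend_service is evaluated.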

Related

In Terraform how to use a condition to only run on certain nodes?

Terraform v1.2.8
I have a generic script that executes a passed-in shell script on my remote AWS EC2 instance, which I also created in Terraform.
resource "null_resource" "generic_script" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    host        = var.ec2_pub_ip
  }

  provisioner "file" {
    source      = "../modules/k8s_installer/${var.shell_script}"
    destination = "/tmp/${var.shell_script}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod u+x /tmp/${var.shell_script}",
      "sudo /tmp/${var.shell_script}"
    ]
  }
}
Now I want to be able to modify it so it runs on
all nodes
this node but not that node
that node but not this node
So I created variables in the variables.tf file
variable "run_on_THIS_node" {
  type        = bool
  description = "Run script on THIS node"
  default     = false
}

variable "run_on_THAT_node" {
  type        = bool
  description = "Run script on THAT node"
  default     = false
}
How can I put a condition to achieve what I want to do?
resource "null_resource" "generic_script" {
  count = ???
  ...
}
You could use the ternary operator for this. For example, based on the defined variables, the condition would look like:
resource "null_resource" "generic_script" {
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes) # or var.number_of_nodes
  ...
}
The missing piece of the puzzle is the variable (or a number) that tells the script to run on all the nodes. It does not have to use the length function; you could define it as a number only. However, this is only part of the code you would have to add or edit, as there would also have to be a way to control the host based on the index. That means you would probably have to change var.ec2_pub_ip into a list.
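Putting that together, a rough sketch might look like the following. This is untested, and var.all_nodes plus the assumption that THIS node is index 0 and THAT node is index 1 are illustrative, not from the question:

```hcl
variable "all_nodes" {
  type        = list(string)
  description = "Public IPs of every node; replaces the single var.ec2_pub_ip"
}

resource "null_resource" "generic_script" {
  # One copy when targeting a single node, otherwise one per node.
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes)

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    # Assumed layout: THIS node at index 0, THAT node at index 1.
    host = var.run_on_THIS_node ? var.all_nodes[0] : (var.run_on_THAT_node ? var.all_nodes[1] : var.all_nodes[count.index])
  }

  provisioner "remote-exec" {
    inline = ["sudo /tmp/${var.shell_script}"]
  }
}
```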

How to return function app keys from a terraform module

I am creating a web app and a function; the web app calls the function.
My Terraform structure is like this:
main.tf
variable.tf
module/webapp
module/function
In main.tf I call module/function to create the function, and then I call module/webapp to create the web app.
I need to provide the function key in the configuration for the web app.
Terraform azurerm provider 2.27.0 has added function keys as a data source.
https://github.com/terraform-providers/terraform-provider-azurerm/pull/7902
This is how it is described in terraform documentation.
https://www.terraform.io/docs/providers/azurerm/d/function_app_host_keys.html
data "azurerm_function_app_host_keys" "example" {
  name                = "example-function"
  resource_group_name = azurerm_resource_group.example.name
}
How exactly do I return these keys to the main module? I tried the following, but it returns the error shown after the code:
resource "azurerm_function_app" "myfunc" {
  name     = var.function_app
  location = var.region
  ...
  tags = var.tags
}

output "hostname" {
  value = azurerm_function_app.myfunc.default_hostname
}

output "functionkeys" {
  value = azurerm_function_app.myfunc.azurerm_function_app_host_keys
}
Error: unsupported attribute
This object has no argument, nested block, or exported attribute named
"azurerm_function_app_host_keys".
Another attempt appears more promising: in the main module I added a data element, expecting it to execute after the function has been created and fetch the key. But I'm getting a 400 error.
In the main module:
data "azurerm_function_app_host_keys" "keymap" {
  name                = var.function_app_name
  resource_group_name = var.resource_group_name
  depends_on          = [module.function_app]
}
Error making Read request on AzureRM Function App Hostkeys "FunctionApp": web.AppsClient#ListHostKeys: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Encountered an error (ServiceUnavailable) from host runtime." Details=[{"Message":"Encountered an error (ServiceUnavailable) from host runtime."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","Message":"Encountered an error
(ServiceUnavailable) from host runtime."}}]
Thanks,
Tauqir
I did some testing around this and there are two things. It looks like you need to deploy a function or restart the function app for it to generate the keys. If you're deploying the function and then trying to get the keys straight away, Terraform doesn't seem to wait; there's a delay between the function starting and the keys becoming available. There are also issues in Terraform around this; I had the issue with v12, see #26074.
I've gone back to using a module I wrote (bottom link), which waits for the key to become available.
https://github.com/hashicorp/terraform/issues/26074
https://github.com/eltimmo/terraform-azure-function-app-get-keys
What you're doing is correct from what I can gather, you will need to pass the values into the webapp module in your main.tf like so:
module "webapp" {
  ...
  func_hostname = module.function.hostname
  functionkeys  = module.function.functionkeys
}
and have the variables set up in your webapp module
variable "func_hostname" {
  type = string
}

variable "functionkeys" {
  type = string
}
What I can see is that you're trying to return azurerm_function_app_host_keys from the azurerm_function_app resource, where it does not exist.
Try returning the keys from the data source instead.
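Concretely, the outputs in the function module could be sketched like this (untested; the default_function_key attribute name is taken from the azurerm data source docs, so double-check it against your provider version):

```hcl
data "azurerm_function_app_host_keys" "example" {
  name                = azurerm_function_app.myfunc.name
  resource_group_name = var.resource_group_name
  depends_on          = [azurerm_function_app.myfunc]
}

output "hostname" {
  value = azurerm_function_app.myfunc.default_hostname
}

output "functionkeys" {
  value     = data.azurerm_function_app_host_keys.example.default_function_key
  sensitive = true
}
```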
I was able to achieve this by using the lifecycle ordering of the resources being created, via depends_on: essentially letting Terraform know to first create the resource group and the Azure Function before trying to pull the keys.
resource "azurerm_resource_group" "rg" {
  ...
}

resource "azurerm_function_app" "app" {
  ...
}

# Wait until the Azure resource group and Azure Function have been created.
# (I believe waiting on just the function will work too.)
data "azurerm_function_app_host_keys" "hostkey" {
  depends_on = [
    azurerm_resource_group.rg,
    azurerm_function_app.app
  ]
  ...
}

Terraform - Resource dependency on module

I have a Terraform module, which we'll call parent, and a child module used inside of it that we'll refer to as child. The goal is to have the child module run its provisioner before the kubernetes_deployment resource is created. Basically, the child module builds and pushes a Docker image; if the image is not already present, the kubernetes_deployment will wait and eventually time out, because there's no image for the Deployment to use when creating pods. I've tried everything I've been able to find online (output variables in the child module, using depends_on in the kubernetes_deployment resource, etc.) and have hit a wall. I would greatly appreciate any help!
parent.tf
module "child" {
  source = ".\\child-module-path"
  ...
}

resource "kubernetes_deployment" "kub_deployment" {
  ...
}

child-module-path\child.tf

data "external" "hash_folder" {
  program = ["powershell.exe", "${path.module}\\bin\\hash_folder.ps1"]
}

resource "null_resource" "build" {
  triggers = {
    md5 = data.external.hash_folder.result.md5
  }

  provisioner "local-exec" {
    command     = "${path.module}\\bin\\build.ps1 ${var.argument_example}"
    interpreter = ["powershell.exe"]
  }
}
Example Terraform error output:
module.parent.kubernetes_deployment.kub_deployment: Still creating... [10m0s elapsed]
Error output:
Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...
In your child module, declare an output value that depends on the null resource that has the provisioner associated with it:
output "build_complete" {
  # The actual value here doesn't really matter,
  # as long as this output refers to the null_resource.
  value = null_resource.build.triggers.md5
}
Then in your "parent" module, you can either make use of module.child.build_complete in an expression (if including the MD5 string in the deployment somewhere is useful), or you can just declare that the resource depends on the output.
resource "kubernetes_deployment" "example" {
  depends_on = [module.child.build_complete]
  ...
}
Because the output depends on the null_resource and the kubernetes_deployment depends on the output, transitively the kubernetes_deployment now effectively depends on the null_resource, creating the ordering you wanted.

Efficient variable validation with Terraform

Is there an efficient way to apply validation logic to variables used in a Terraform run?
Specifically, I want to check the length and casing of some variables. The variables are a combination of ones declared in tfvars files, ones declared in variables.tf files, and ones collected at runtime by Terraform.
Thanks.
Custom Validation Rules
See the Terraform documentation on Input Variables: Custom Validation Rules.
Results
Failure case
provider "aws" {
  profile = "default"
}

terraform {
  experiments = [variable_validation]
}

## Custom Validation Rules
variable "test" {
  type        = string
  description = "Example to test the case and length of the variable"
  default     = "TEsT"

  validation {
    condition     = length(var.test) > 4 && upper(var.test) == var.test
    error_message = "Validation condition of the test variable did not meet."
  }
}
Execution
$ terraform plan
Warning: Experimental feature "variable_validation" is active
on main.tf line 5, in terraform:
5: experiments = [variable_validation]
Experimental features are subject to breaking changes in future minor or patch
releases, based on feedback.
If you have feedback on the design of this feature, please open a GitHub issue
to discuss it.
Error: Invalid value for variable # <---------------------------
on main.tf line 9:
9: variable "test" {
Validation condition of the test variable did not meet.
This was checked by the validation rule at main.tf:14,3-13.
Pass case
terraform {
  experiments = [variable_validation]
}

## Custom Validation Rules
variable "test" {
  type        = string
  description = "Example to test the case and length of the variable"
  default     = "TESTED"

  validation {
    condition     = length(var.test) > 4 && upper(var.test) == var.test
    error_message = "Validation condition of the test variable did not meet."
  }
}
Execution
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
Others
Alternatively, use a null_resource with local-exec to implement the logic in a shell script, or use the external provider to send the variable to an external program for validation.
This isn't something you can currently do directly with Terraform, so I find it easier to just mangle the input variables into the required format where necessary.
As an example, the aws_lb_target_group resource takes a protocol parameter that currently requires an uppercase value, instead of automatically upper-casing it and suppressing the diff the way the aws_lb_listener resource does for its protocol (or even for the protocol in the health_check block).
To solve this I just use the upper function when creating the resource:
variable "protocol" {
  default = "http"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_lb_target_group" "test" {
  name     = "tf-example-lb-tg"
  port     = 80
  protocol = "${upper(var.protocol)}"
  vpc_id   = "${aws_vpc.main.id}"
}
As for checking length, I just substring things to make them the right length. I currently do this for ALBs: the name has a max length of 32, and I have GitLab CI create review environments for some services that get a name based on the slug of the Git branch name, so I have little control over the length that is used.
variable "environment" {}
variable "service_name" {}

variable "internal" {
  default = true
}

resource "aws_lb" "load_balancer" {
  name            = "${substr(var.environment, 0, min(length(var.environment), 27 - length(var.service_name)))}-${var.service_name}-${var.internal ? "int" : "ext"}"
  internal        = "${var.internal}"
  security_groups = ["${aws_security_group.load_balancer.id}"]
  subnets         = ["${data.aws_subnet_ids.selected.ids}"]
}
With the above, any combination of environment and service name lengths leads to the environment/service-name pair being trimmed to at most 27 characters, which leaves room for the extra characters that I want to append.
I got inspired by this conversation and found the following already-existing provider:
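As a worked example of that arithmetic (the values here are illustrative, not from the original setup): with a 30-character environment name and a 3-character service name, 27 - 3 = 24, so the environment slug is cut to its first 24 characters and the final name lands at exactly 32 characters:

```hcl
locals {
  environment  = "review-my-long-branch-name-env" # 30 characters
  service_name = "api"

  # min(30, 27 - 3) = 24, so only the first 24 characters survive.
  trimmed = substr(local.environment, 0, min(length(local.environment), 27 - length(local.service_name)))

  # "review-my-long-branch-na-api-int": 24 + 1 + 3 + 1 + 3 = 32 characters
  lb_name = "${local.trimmed}-${local.service_name}-int"
}
```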
https://github.com/craigmonson/terraform-provider-validate

How to define execution order without trigger option in terraform

Does it make sense to assume that things run in the order they are defined in main.tf in Terraform?
I understand that the trigger option is needed to define ordering in Terraform, but if the trigger option cannot be used, as with data "external", how can I define the execution order?
For example, I would like things to run in this order:
get_my_public_ip -> ec2 -> db -> test_http_status
main.tf is as follows:
data "external" "get_my_public_ip" {
  program = ["sh", "scripts/get_my_public_ip.sh"]
}

module "ec2" {
  ...
}

module "db" {
  ...
}

data "external" "test_http_status" {
  program = ["sh", "scripts/test_http_status.sh"]
}
I can only provide feedback on the code you provided, but one way to ensure the test_http_status command runs once the DB is ready is to use depends_on within a null_resource:
resource "null_resource" "test_status" {
  depends_on = [module.db] # or any output variable

  provisioner "local-exec" {
    command = "scripts/test_http_status.sh"
  }
}
But as @JimB already mentioned, Terraform isn't procedural, so guaranteeing a global execution order isn't possible.
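As an aside, on Terraform 0.13 and later, depends_on is also accepted directly on data blocks and module blocks, so an approximation of the desired ordering could be sketched without a null_resource (untested):

```hcl
module "db" {
  # ...
  depends_on = [module.ec2]
}

data "external" "test_http_status" {
  program = ["sh", "scripts/test_http_status.sh"]

  # Defers this check until the db module has been applied.
  depends_on = [module.db]
}
```

Note that depends_on on a data source defers its read, so the check runs during apply rather than plan.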
