How to define execution order without the trigger option in Terraform

Does Terraform run resources in the order they are defined in main.tf?
My understanding is that a triggers argument is normally used to define ordering in Terraform, but a data source such as data "external" cannot use triggers. How can I define the execution order in that case?
For example, I would like the following to run in order:
get_my_public_ip -> ec2 -> db -> test_http_status
My main.tf is as follows:
data "external" "get_my_public_ip" {
  program = ["sh", "scripts/get_my_public_ip.sh"]
}

module "ec2" {
  ...
}

module "db" {
  ...
}

data "external" "test_http_status" {
  program = ["sh", "scripts/test_http_status.sh"]
}
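For context, a program used with the external data source must print a single JSON object whose values are all strings. A minimal sketch of what scripts/get_my_public_ip.sh could look like (the checkip.amazonaws.com endpoint is an assumption; any service that echoes your public IP works):

```shell
#!/bin/sh
# Hypothetical sketch of scripts/get_my_public_ip.sh.
# The external data source expects exactly one JSON object on stdout,
# with every value encoded as a string.
ip="$(curl -fsS --max-time 5 https://checkip.amazonaws.com | tr -d '[:space:]')"
printf '{"ip": "%s"}\n' "$ip"
```

Terraform then exposes the value as data.external.get_my_public_ip.result.ip.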

I can only give feedback on the code you provided, but one way to ensure the test_http_status script runs once the DB is ready is to use depends_on inside a null_resource:
resource "null_resource" "test_status" {
  # depends_on takes resource or module references, not output attributes
  depends_on = [module.db]

  provisioner "local-exec" {
    command = "scripts/test_http_status.sh"
  }
}
But as @JimB already mentioned, Terraform isn't procedural, so you can't dictate an execution order directly; you can only declare dependencies and let Terraform derive the order from them.
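Since ordering comes from references rather than declarations, the usual approach is to pass each step's output into the next, which creates the dependency implicitly. A sketch under the assumption that the modules accept these inputs (my_ip, ec2_instance_id, and the module outputs are hypothetical names):

```hcl
module "ec2" {
  source = "./modules/ec2"

  # Referencing the data source's result makes Terraform run
  # get_my_public_ip before this module.
  my_ip = data.external.get_my_public_ip.result.ip
}

module "db" {
  source = "./modules/db"

  # Referencing an ec2 output orders db after the ec2 module.
  ec2_instance_id = module.ec2.instance_id
}
```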

Related

In Terraform how to use a condition to only run on certain nodes?

Terraform v1.2.8
I have a generic script that executes a passed-in shell script on my remote AWS EC2 instance, which I've also created in Terraform.
resource "null_resource" "generic_script" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    host        = var.ec2_pub_ip
  }

  provisioner "file" {
    source      = "../modules/k8s_installer/${var.shell_script}"
    destination = "/tmp/${var.shell_script}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod u+x /tmp/${var.shell_script}",
      "sudo /tmp/${var.shell_script}"
    ]
  }
}
Now I want to be able to modify it so it runs on
all nodes
this node but not that node
that node but not this node
So I created variables in the variables.tf file
variable "run_on_THIS_node" {
  type        = bool
  description = "Run script on THIS node"
  default     = false
}

variable "run_on_THAT_node" {
  type        = bool
  description = "Run script on THAT node"
  default     = false
}
How can I put a condition to achieve what I want to do?
resource "null_resource" "generic_script" {
  count = ???
  ...
}
You could use the ternary operator for this. For example, based on the defined variables, the condition would look like:
resource "null_resource" "generic_script" {
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes) # or var.number_of_nodes
  ...
}
The missing piece of the puzzle is a variable (or a plain number) telling the script to run on all nodes; it does not have to use the length function. Note, however, that this is only part of the change: you also need a way to select the host by index, which means var.ec2_pub_ip would probably have to become a list.
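Putting those pieces together, a sketch could look like the following, assuming var.ec2_pub_ip has been turned into a list named var.ec2_pub_ips and that THIS and THAT node sit at indexes 0 and 1 (all of these names and positions are illustrative):

```hcl
variable "ec2_pub_ips" {
  type        = list(string)
  description = "Public IPs of all nodes"
}

resource "null_resource" "generic_script" {
  # One instance when a single node is targeted, otherwise one per node.
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.ec2_pub_ips)

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    # Pick THIS node's IP, THAT node's IP, or the IP at this index.
    host = var.run_on_THIS_node ? var.ec2_pub_ips[0] : (var.run_on_THAT_node ? var.ec2_pub_ips[1] : var.ec2_pub_ips[count.index])
  }

  # file and remote-exec provisioners as in the original resource
}
```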

Conditionally triggering of Terraform local_exec provisioner based on local_file changes

I'm using Terraform 0.14 and have two resources: a local_file that creates a file on the local machine based on a variable, and a null_resource with a local-exec provisioner.
This all works as intended, but I can only get the provisioner to either always run (using an ever-changing trigger such as timestamp()) or run only once. I'd like it to run every time (and only when) the local_file actually changes.
Does anybody know how I can set a trigger that changes when the local_file content has changed, e.g. a last-updated timestamp or a checksum value?
resource "local_file" "foo" {
  content  = var.foobar
  filename = "/tmp/foobar.txt"
}

resource "null_resource" "null" {
  triggers = {
    always_run = timestamp() # this will always run
  }

  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
You can use a hash of the file's content as the trigger, so it changes only when the content does:
resource "null_resource" "null" {
  triggers = {
    file_changed = md5(local_file.foo.content)
  }

  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
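As a variant, newer releases of the hashicorp/local provider (2.2.0 and later, so this assumes you can upgrade the provider) export ready-made checksum attributes on local_file, so you can reference one directly instead of hashing the content yourself:

```hcl
resource "null_resource" "null" {
  triggers = {
    # content_md5 is exported by the local_file resource itself
    # in hashicorp/local >= 2.2.0.
    file_changed = local_file.foo.content_md5
  }

  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
```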

Terraform : depends_on argument not creating the specified resource first

I want to push the Terraform state file to a GitHub repo. The file function in Terraform fails to read .tfstate files, so I need to change their extension to .txt first. To automate this, I created a null_resource with a provisioner that copies the tfstate file to a .txt file in the same directory. I came across the depends_on argument, which lets you specify that a particular resource must be created before the current one. However, it is not working: I immediately get an error that the terraform.txt file doesn't exist when the file function tries to read it.
provider "github" {
  token = "TOKEN"
  owner = "USERNAME"
}

resource "null_resource" "tfstate_to_txt" {
  provisioner "local-exec" {
    command = "copy terraform.tfstate terraform.txt"
  }
}

resource "github_repository_file" "state_push" {
  repository = "TerraformStates"
  file       = "terraform.tfstate"
  content    = file("terraform.txt")
  depends_on = [null_resource.tfstate_to_txt]
}
The documentation for the file function explains this behavior:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
This paragraph also includes a suggestion for how to get the result you wanted: use the local_file data source, from the hashicorp/local provider, to read the file as a resource operation (during the apply phase) rather than as part of configuration loading:
resource "null_resource" "tfstate_to_txt" {
  triggers = {
    source_file = "terraform.tfstate"
    dest_file   = "terraform.txt"
  }

  provisioner "local-exec" {
    command = "copy ${self.triggers.source_file} ${self.triggers.dest_file}"
  }
}

data "local_file" "state" {
  filename = null_resource.tfstate_to_txt.triggers.dest_file
}

resource "github_repository_file" "state_push" {
  repository = "TerraformStates"
  file       = "terraform.tfstate"
  content    = data.local_file.state.content
}
Please note that although the above should get the order of operations you were asking about, reading the terraform.tfstate file while Terraform is running is a very unusual thing to do and is likely to result in undefined behavior, because Terraform can update that file repeatedly and at unpredictable moments throughout terraform apply.
If your intent is to have Terraform keep the state in a remote system rather than on local disk, the usual way to achieve that is to configure remote state, which will then cause Terraform to keep the state only remotely, and not use the local terraform.tfstate file at all.
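A minimal remote state configuration, for example with the s3 backend (the bucket, key, and region values here are placeholders), looks like:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder
    key    = "project/terraform.tfstate" # placeholder
    region = "us-east-1"
  }
}
```

With this in place, Terraform stores state remotely and no local terraform.tfstate file needs to be copied or committed at all.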
depends_on alone does not help here, because the file function is evaluated while the configuration is loaded, before any provisioner runs.
Here's a workaround that can help:
resource "null_resource" "tfstate_to_txt" {
  provisioner "local-exec" {
    command = "copy terraform.tfstate terraform.txt"
  }
}

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 20"
  }

  triggers = {
    "before" = null_resource.tfstate_to_txt.id
  }
}

resource "github_repository_file" "state_push" {
  repository = "TerraformStates"
  file       = "terraform.tfstate"
  content    = file("terraform.txt")
  depends_on = [null_resource.delay]
}
The delay null_resource ensures the second resource runs after the first; if the copy command takes longer, just increase the sleep duration.

How to run a null_resource in terraform at the start of the script

I have a use case where I read all my variables from locals in Terraform, as shown below. Before that happens, I want a null_resource block to run a Python script that populates the locals file with data.
In simple terms, my use case is to execute a null_resource block at the start of the Terraform run and only then evaluate the other blocks.
My current code sample is as follows:
// executing script for populating data in app_config.json
resource "null_resource" "populate_data" {
  provisioner "local-exec" {
    command = "python3 scripts/data_populate.py"
  }
}

// reading data variables from app_config.json file
locals {
  config_data = jsondecode(file("${path.module}/app_config.json"))
}
How do I achieve that? All I have tried is adding a triggers block inside locals, as follows, but even that did not work.
locals {
  triggers = {
    order = null_resource.populate_data.id
  }

  config_data = jsondecode(file("${path.module}/app_config.json"))
}
A locals block does not support depends_on, and the file function is evaluated while the configuration is loaded, so the ordering has to be expressed through something that participates in the dependency graph. You can read the file with the local_file data source and make that depend on the null_resource:
resource "null_resource" "populate_data" {
  provisioner "local-exec" {
    command = "python3 scripts/data_populate.py"
  }
}

// reading data variables from app_config.json file
data "local_file" "config" {
  filename   = "${path.module}/app_config.json"
  depends_on = [null_resource.populate_data]
}

locals {
  config_data = jsondecode(data.local_file.config.content)
}
Now local.config_data is always evaluated after populate_data has run.

Terraform - Resource dependency on module

I have a Terraform module, which we'll call parent, and a child module used inside of it that we'll refer to as child. The goal is to have the child module run its provisioner before the kubernetes_deployment resource is created. Basically, the child module builds and pushes a Docker image; if the image is not already present, the kubernetes_deployment will wait and eventually time out, because there is no image for the Deployment to use when creating pods. I've tried everything I've been able to find online (output variables in the child module, depends_on in the kubernetes_deployment resource, etc.) and have hit a wall. I would greatly appreciate any help!
parent.tf
module "child" {
  source = ".\\child-module-path"
  ...
}

resource "kubernetes_deployment" "kub_deployment" {
  ...
}
child-module-path\child.tf
data "external" "hash_folder" {
  program = ["powershell.exe", "${path.module}\\bin\\hash_folder.ps1"]
}

resource "null_resource" "build" {
  triggers = {
    md5 = data.external.hash_folder.result.md5
  }

  provisioner "local-exec" {
    command     = "${path.module}\\bin\\build.ps1 ${var.argument_example}"
    interpreter = ["powershell.exe"]
  }
}
Example Terraform error output:
module.parent.kubernetes_deployment.kub_deployment: Still creating... [10m0s elapsed]
Error output:
Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...
In your child module, declare an output value that depends on the null resource that has the provisioner associated with it:
output "build_complete" {
  # The actual value here doesn't really matter,
  # as long as this output refers to the null_resource.
  value = null_resource.build.triggers.md5
}
Then in your "parent" module, you can either make use of module.child.build_complete in an expression (if including the MD5 string in the deployment somewhere is useful), or you can just declare that the resource depends on the output.
resource "kubernetes_deployment" "example" {
  depends_on = [module.child.build_complete]
  ...
}
Because the output depends on the null_resource and the kubernetes_deployment depends on the output, transitively the kubernetes_deployment now effectively depends on the null_resource, creating the ordering you wanted.
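If you would rather surface the dependency in the resource body than in depends_on, the same effect can be had by interpolating the output into the deployment, for example as an annotation (the annotation key here is made up; any expression that references the output works):

```hcl
resource "kubernetes_deployment" "example" {
  metadata {
    name = "example"
    annotations = {
      # Referencing the child module's output creates the same implicit
      # dependency; the hash value itself is incidental, though it also
      # causes the deployment to update whenever the image content changes.
      "example.com/image-hash" = module.child.build_complete
    }
  }
  # spec as before
}
```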
