Run Destroy-Time Provisioner before local_file is deleted - terraform

I have a Terraform script which creates a config.json file and then runs a command that uses that config.json:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
}
This all works great, but when I run terraform destroy I'd like to run a different command - I tried to do this with a destroy-time provisioner in a null_resource by adding the following:
resource "null_resource" "test" {
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}
The script is run, but it runs after the config file is deleted - it errors, because it needs that config file to exist for it to know what to delete.
How would I fix this?
Thanks!

I moved the destroy time provisioner into the original resource, and it worked great:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
# Delete on_destroy
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}
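As a side note, recent Terraform versions expect the when argument as a bare keyword rather than a quoted string. A minimal sketch of the same resource, assuming Terraform 0.12 or newer:

resource "local_file" "config" {
  filename = "config.json"
  content  = "..."

  provisioner "local-exec" {
    command = "deploy"
  }

  # In 0.12+ syntax, destroy is a keyword, not a string
  provisioner "local-exec" {
    when    = destroy
    command = "delete"
  }
}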

Related

Send and overwrite an old file with Terraform on a VPS

I am trying to create a Terraform script that sends a file contained in a folder to a server and overwrites the old file there if it has changed since.
I have succeeded in sending the file to the server, but when I run "terraform plan" after modifying the file, Terraform tells me that my configuration has not changed. I don't want to have to modify an environment variable by hand; I would like this to happen automatically. Has anyone ever had to deal with this situation?
My try #1:
resource "null_resource" "example" {
provisioner "file" {
source = "${path.cwd}/conf/file"
destination = "/home/user/conf/file"
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
Try #2:
resource "null_resource" "example" {
provisioner "remote-exec" {
inline = [
"set -e",
"cd /home/user/conf",
"rsync --ignore-existing --checksum -avz -e 'ssh -i /root/.ssh/sshTerraformDeployment' ${path.cwd}/conf root#host:/home/user/conf"
]
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
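One approach, sketched rather than tested, is to give the null_resource a trigger derived from the file's content hash so it is replaced (and the file provisioner re-runs) whenever the local file changes; filemd5() here is assumed to point at the same path used as the provisioner source:

resource "null_resource" "example" {
  # Re-create the resource, and re-run the provisioner, when the file content changes
  triggers = {
    file_hash = filemd5("${path.cwd}/conf/file")
  }

  provisioner "file" {
    source      = "${path.cwd}/conf/file"
    destination = "/home/user/conf/file"

    connection {
      host        = "IP"
      user        = "user"
      private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
    }
  }
}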

Conditionally triggering of Terraform local_exec provisioner based on local_file changes

I'm using Terraform 0.14 and have two resources: a local_file that creates a file on the local machine based on a variable, and a null_resource with a local-exec provisioner.
This all works as intended but I can only get it to either always run the provisioner (using an always-changing trigger, like timestamp()) or only run it once. Now I'd like to get it to run every time (and only when) the local_file actually changes.
Does anybody know how I can set a trigger that changes when the local_file content has changed? e.g. a last-updated-timestamp or maybe a checksum value?
resource "local_file" "foo" {
content = var.foobar
filename = "/tmp/foobar.txt"
}
resource "null_resource" "null" {
triggers = {
always_run = timestamp() # this will always run
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}
You can try using the file's hash to indicate that it has changed:
resource "null_resource" "null" {
triggers = {
file_changed = md5(local_file.foo.content)
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}

How to run a null_resource in terraform at the start of the script

I have a use case where I read all variables from locals in Terraform, as shown below, but before that I want to run a null_resource block which runs a Python script and updates all the data in the file that locals reads from.
So, in simple words, my use case is to execute a null_resource block at the start of the Terraform run and only then evaluate all the other resource blocks.
My current code sample is as follows:
// executing script for populating data in app_config.json
resource "null_resource" "populate_data" {
provisioner "local-exec" {
command = "python3 scripts/data_populate.py"
}
}
// reading data variables from app_config.json file
locals {
  config_data = jsondecode(file("${path.module}/app_config.json"))
}
How do I achieve that? All I have tried is adding a triggers block inside locals, as follows, but even that did not work.
locals {
  triggers = {
    order = null_resource.populate_data.id
  }

  config_data = jsondecode(file("${path.module}/app_config.json"))
}
You can use depends_on
resource "null_resource" "populate_data" {
provisioner "local-exec" {
command = "python3 scripts/data_populate.py"
}
}
// reading data variables from app_config.json file
locals {
depends_on = [null_resource.populate_data]
config_data = jsondecode(file("${path.module}/app_config.json"))
}
Now the locals will always be evaluated after populate_data.
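Another pattern worth considering, sketched here under the assumption that the hashicorp/local provider is available, is to read the file through the local_file data source instead of the file() function. Data sources do accept depends_on, so the read itself can be ordered after the script has run:

data "local_file" "app_config" {
  filename = "${path.module}/app_config.json"

  # Defer reading until the python script has populated the file
  depends_on = [null_resource.populate_data]
}

locals {
  config_data = jsondecode(data.local_file.app_config.content)
}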

Run terraform local-exec

I would like to perform the following scenario in Terraform:
resource "aws_ecr_repository" "jenkins" {
name = var.image_name
provisioner "local-exec" {
command = "./deploy-image.sh ${self.repository_url} ${var.image_name}"
}
}
However, it is not executed. Does anyone have an idea what the problem could be?
I had to add a working directory
resource "null_resource" "backend_image" {
triggers = {
build_trigger = var.build_trigger
}
provisioner "local-exec" {
command = "./deploy-image.sh ${var.region} ${var.image_name} ${var.ecr_repository}"
interpreter = ["bash", "-c"]
working_dir = "${path.cwd}/${path.module}"
}
}
Now it works.
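If you prefer to keep the provisioner on the repository resource itself, the same fix should apply there too. A sketch, reusing the script arguments from the question:

resource "aws_ecr_repository" "jenkins" {
  name = var.image_name

  provisioner "local-exec" {
    command     = "./deploy-image.sh ${self.repository_url} ${var.image_name}"
    interpreter = ["bash", "-c"]
    # Run the script relative to the module directory instead of wherever terraform was invoked
    working_dir = path.module
  }
}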

Dependency on local file creation

I am setting up an EKS cluster with Terraform following the example https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/aws_auth.tf and I now have two Terraform files:
kubeconfig.tf
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
aws-auth.tf
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig /tmp/kubeconfig"
}
...
}
When I run this the local-exec command fails with
Output: error: stat tmp/kubeconfig: no such file or directory
On a second run it succeeds. I think the file is created after local-exec tries to use it, so local-exec should depend on the file resource. I tried to express the dependency by using interpolation (an implicit dependency) like this:
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${resource.local_file.kubeconfig.filename}"
}
But this always gives me
Error: resource 'null_resource.update_config_map_aws_auth' provisioner
local-exec (#1): unknown resource 'resource.local_file' referenced in
variable resource.local_file.kubeconfig.filename
You don't need the resource. part when using interpolation in the last code block.
When Terraform first started, it only had resources, so there was no need to label something as a resource: that was the only case. Modules and data sources were added later and needed differentiation in naming, so they get the module. and data. prefixes while resources keep a bare name, which is how Terraform tells resources, data sources, and modules apart.
So you probably want something like this:
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${local_file.kubeconfig.filename}"
}
}
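If you also want the kubectl command to re-run whenever the rendered kubeconfig changes, a triggers block on the null_resource creates the same implicit dependency. This is only a sketch, combining the answer above with the content-hash technique from the earlier question:

resource "null_resource" "update_config_map_aws_auth" {
  # Replacing the resource re-runs the provisioner when the kubeconfig content changes
  triggers = {
    kubeconfig_hash = "${md5(local_file.kubeconfig.content)}"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${local_file.kubeconfig.filename}"
  }
}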
