Can Terraform's null_resource use timeouts blocks?

I have a Terraform null_resource that looks like the following:
resource "null_resource" "foo" {
  provisioner "local-exec" {
    command = "foo.sh"
  }
}
What I would like to know is whether I can use timeouts with this resource, as in the following:
resource "null_resource" "foo" {
  provisioner "local-exec" {
    command = "foo.sh"
  }
  timeouts {
    create = "60m"
    delete = "2h"
  }
}

I believe the null provider doesn't support timeouts blocks for its resources.
However, you can approximate the same behavior by wrapping each command in the timeout utility, using a destroy-time provisioner for the delete case:
resource "null_resource" "foo" {
  provisioner "local-exec" {
    command = "timeout 60m foo.sh"
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "timeout 2h foo.sh"
  }
}
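If the durations need to vary per environment, the same wrapper can be parameterized. A sketch only (the variable name create_timeout is my own invention); note that GNU timeout exits with status 124 when the deadline is hit, which makes the provisioner, and therefore the apply, fail rather than hang:

```hcl
# Hypothetical sketch: parameterize the timeout duration.
# GNU timeout exits 124 on expiry, failing the provisioner.
variable "create_timeout" {
  default = "60m"
}

resource "null_resource" "foo" {
  provisioner "local-exec" {
    command     = "timeout ${var.create_timeout} foo.sh"
    interpreter = ["bash", "-c"]
  }
}
```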

Related

Conditionally triggering a Terraform local-exec provisioner based on local_file changes

I'm using Terraform 0.14 and have two resources: one is a local_file that creates a file on the local machine based on a variable, and the other is a null_resource with a local-exec provisioner.
This all works as intended, but I can only get it to either always run the provisioner (using an always-changing trigger, like timestamp()) or run it only once. Now I'd like it to run every time (and only when) the local_file actually changes.
Does anybody know how I can set a trigger that changes when the local_file content has changed? E.g. a last-updated timestamp or maybe a checksum value?
resource "local_file" "foo" {
  content  = var.foobar
  filename = "/tmp/foobar.txt"
}

resource "null_resource" "null" {
  triggers = {
    always_run = timestamp() # this will always run
  }
  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
You can use a hash of the file content as the trigger, so the provisioner runs only when the content actually changes:
resource "null_resource" "null" {
  triggers = {
    file_changed = md5(local_file.foo.content)
  }
  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
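If the script depends on more than the content (e.g. the destination path too), the same triggers map can hash several values; a sketch along the same lines:

```hcl
# Sketch: re-run when either the content or the filename changes.
resource "null_resource" "null" {
  triggers = {
    content_hash  = md5(local_file.foo.content)
    filename_hash = md5(local_file.foo.filename)
  }
  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}
```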

Run terraform local-exec

I would like to perform the following scenario in Terraform:
resource "aws_ecr_repository" "jenkins" {
  name = var.image_name
  provisioner "local-exec" {
    command = "./deploy-image.sh ${self.repository_url} ${var.image_name}"
  }
}
However, it is not executed. Does anyone have an idea what the cause could be?
I had to add a working directory:
resource "null_resource" "backend_image" {
  triggers = {
    build_trigger = var.build_trigger
  }
  provisioner "local-exec" {
    command     = "./deploy-image.sh ${var.region} ${var.image_name} ${var.ecr_repository}"
    interpreter = ["bash", "-c"]
    working_dir = "${path.cwd}/${path.module}"
  }
}
Now it works.
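An alternative to gluing path.cwd and path.module together is to make the script path itself absolute; a sketch, assuming Terraform 0.12+ where the abspath() function is available:

```hcl
# Sketch: resolve the module directory to an absolute path
# instead of setting working_dir.
resource "null_resource" "backend_image" {
  triggers = {
    build_trigger = var.build_trigger
  }
  provisioner "local-exec" {
    command     = "${abspath(path.module)}/deploy-image.sh ${var.region} ${var.image_name} ${var.ecr_repository}"
    interpreter = ["bash", "-c"]
  }
}
```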

provisioner local-exec access to each.key

Terraform v0.12.6, provider.aws v2.23.0
I am trying to create two AWS instances using the new for_each construct. I don't actually think this is an AWS provider issue; it's more a terraform/for_each/provisioner issue.
It worked as advertised until I tried to add a local-exec provisioning step.
I don't know how to modify the local-exec example to work with the for_each variable. I am getting a Terraform error about a cycle.
locals {
  instances = {
    s1 = {
      private_ip = "192.168.47.191"
    },
    s2 = {
      private_ip = "192.168.47.192"
    },
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]
  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }
  provisioner "local-exec" {
    command = "echo ${aws_instance.example[each.key].public_ip} >> ip_address.txt"
  }
}
But I get this error:
./terraform apply
Error: Cycle: aws_instance.example["s2"], aws_instance.example["s1"]
Should the for_each each.key variable be expected to work in a provisioning step? There are other ways to get the public_ip later, either by reading the state file or by querying AWS given the instance IDs, but accessing the resource's attributes within the local-exec provisioner would come in handy in many ways.
Try using the self variable:
provisioner "local-exec" {
  command = "echo ${self.public_ip} >> ip_address.txt"
}
Note to readers that resource-level for_each is a relatively new feature in Terraform and requires version >=0.12.6.
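Putting that together with the original locals, the full resource would look roughly like this (a sketch; self refers to the instance the provisioner is attached to, which is what breaks the cycle):

```hcl
resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  provisioner "local-exec" {
    # self is this instance; each.key is still available for labeling.
    command = "echo ${each.key}: ${self.public_ip} >> ip_address.txt"
  }
}
```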

Run Destroy-Time Provisioner before local_file is deleted

I have a Terraform script which creates a config.json file and then runs a command that uses that config.json:
resource "local_file" "config" {
  # Output vars to config
  filename = "config.json"
  content  = "..."

  # Deploy using config
  provisioner "local-exec" {
    command = "deploy"
  }
}
This all works great, but when I run terraform destroy I'd like to run a different command. I tried to do this with a destroy-time provisioner in a null_resource by adding the following:
resource "null_resource" "test" {
  provisioner "local-exec" {
    when    = "destroy"
    command = "delete"
  }
}
The script runs, but it runs after the config file is deleted. It errors, because it needs that config file to exist in order to know what to delete.
How would I fix this?
Thanks!
I moved the destroy-time provisioner into the original resource, and it worked great:
resource "local_file" "config" {
  # Output vars to config
  filename = "config.json"
  content  = "..."

  # Deploy using config
  provisioner "local-exec" {
    command = "deploy"
  }

  # Delete on destroy
  provisioner "local-exec" {
    when    = "destroy"
    command = "delete"
  }
}
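Note that on newer Terraform (0.13+), destroy-time provisioners may only reference self, so anything the delete command needs has to be stashed on the resource itself. A hedged sketch using a null_resource with triggers (the --config flag is hypothetical, standing in for however the delete command locates its config):

```hcl
resource "null_resource" "test" {
  triggers = {
    config_path = local_file.config.filename
  }
  provisioner "local-exec" {
    when = destroy
    # Only self is available here; the path was captured in triggers above.
    command = "delete --config ${self.triggers.config_path}"
  }
}
```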

Retrieve the value of a provisioner command?

This is different from "Capture Terraform provisioner output?". I have a resource (a null_resource in this case) with a count and a local-exec provisioner that has some complex interpolated arguments:
resource "null_resource" "complex-provisioning" {
  count = "${var.count}"
  triggers {
    server_triggers = "${null_resource.api-setup.*.id[count.index]}"
    db_triggers     = "${var.db_id}"
  }
  provisioner "local-exec" {
    command = <<EOF
${var.init_command}
do-lots-of-stuff --target=${aws_instance.api.*.private_ip[count.index]} --bastion=${aws_instance.bastion.public_ip} --db=${var.db_name}
EOF
  }
}
I want to be able to show what the provisioner did as output (this is not valid Terraform, just mock-up of what I want):
output "provisioner_commands" {
  value = {
    api_commands = "${null_resource.complex-provisioning.*.provisioner.0.command}"
  }
}
My goal is to get some output like
provisioner_commands = {
  api_commands = [
    "do-lots-of-stuff --target=10.0.0.1 --bastion=77.2.4.34 --db=mydb.local",
    "do-lots-of-stuff --target=10.0.0.2 --bastion=77.2.4.34 --db=mydb.local",
    "do-lots-of-stuff --target=10.0.0.3 --bastion=77.2.4.34 --db=mydb.local",
  ]
}
Can I read provisioner configuration and output it like this? If not, is there a different way to get what I want? (If I didn't need to run over an array of resources, I would define the command in a local variable and reference it both in the provisioner and the output.)
You cannot read the interpolated command back from the local-exec provisioner block, but if you put the same interpolation into a trigger, you can retrieve it in an output with a for expression in Terraform 0.12.x:
resource "null_resource" "complex-provisioning" {
  count = 2
  triggers = {
    command = "echo ${count.index}"
  }
  provisioner "local-exec" {
    command = self.triggers.command
  }
}

output "data" {
  value = [
    for trigger in null_resource.complex-provisioning.*.triggers :
    trigger.command
  ]
}
$ terraform apply
null_resource.complex-provisioning[0]: Refreshing state... [id=9105930607760919878]
null_resource.complex-provisioning[1]: Refreshing state... [id=405391095459979423]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

data = [
  "echo 0",
  "echo 1",
]
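For completeness, the local-variable approach mentioned in the question generalizes to a list of resources just as well; a sketch:

```hcl
# Sketch: compute the commands once in a local, then reference the
# same value from both the provisioners and the output.
locals {
  commands = [for i in range(2) : "echo ${i}"]
}

resource "null_resource" "complex-provisioning" {
  count = length(local.commands)
  provisioner "local-exec" {
    command = local.commands[count.index]
  }
}

output "provisioner_commands" {
  value = local.commands
}
```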
