Terraform provisioner module doesn't show up in the execution plan - terraform

I have included a Terraform null_resource which runs a "sleep 200" command and depends on the previous resource finishing execution. For some reason I don't see the provisioner when I run terraform plan. What could be the reason for that? Below is the main.tf Terraform file:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}

Provisioners are a bit different from resources in Terraform. They are triggered either on creation or on destruction of a resource, and no information about them is stored in the state. That is why adding, modifying, or removing a provisioner on an already-created resource has no effect on your plan or resource: the plan is a detailed description of how the state will change, and provisioners only run at creation/destruction time. When you run your apply you will still observe the sleep in action, because your null_resource will be created. See the Terraform docs on this for more details:
Provisioners
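If you do need the delay to run again on a later apply, one common pattern is to add a triggers map to the null_resource so that a change in the trigger value forces the resource, and therefore its creation-time provisioner, to be recreated. A minimal sketch in the question's own syntax, assuming it is acceptable to tie the re-run to the istio-init release ID:

resource "null_resource" "delay" {
  # Changing any value in triggers forces this resource to be replaced,
  # which re-runs the creation-time provisioner.
  triggers = {
    istio_init_id = "${helm_release.istio-init.id}"
  }

  provisioner "local-exec" {
    command = "sleep 200"
  }

  depends_on = ["helm_release.istio-init"]
}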

Related

Conditionally triggering a Terraform local_exec provisioner based on local_file changes

I'm using Terraform 0.14 and have two resources: a local_file that creates a file on the local machine based on a variable, and a null_resource with a local-exec provisioner.
This all works as intended, but I can only get it to either always run the provisioner (using an always-changing trigger such as timestamp()) or run it only once. Now I'd like it to run every time (and only when) the local_file actually changes.
Does anybody know how I can set a trigger that changes when the local_file content has changed? e.g. a last-updated timestamp or maybe a checksum value?
resource "local_file" "foo" {
content = var.foobar
filename = "/tmp/foobar.txt"
}
resource "null_resource" "null" {
triggers = {
always_run = timestamp() # this will always run
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}
You can use a hash of the file's content as the trigger, so it only changes when the file changes:
resource "null_resource" "null" {
triggers = {
file_changed = md5(local_file.foo.content)
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}
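If the file is written by something other than Terraform (so there is no local_file resource to reference), a hash of the file on disk can serve the same purpose via filemd5(); a sketch assuming the same /tmp/foobar.txt path:

resource "null_resource" "null" {
  triggers = {
    # filemd5() hashes the file on disk, so the provisioner re-runs
    # only when the file's contents change.
    file_changed = filemd5("/tmp/foobar.txt")
  }

  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}

Note that filemd5() is evaluated at plan time, so the file must already exist when Terraform runs.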

Terraform - how to make the 'local_file' resource be recreated on every 'terraform apply'

I have a local_file resource in my Terraform configuration. The problem is how to tell Terraform that I want this resource to be recreated every time my client runs 'terraform apply', even if nothing changed in the resource itself. How can I make this possible?
The local_file resource can't do what I want.
I am using triggers to do this in a null_resource, but there is no such option in the local_file resource.
null_resource does what I want.
Instead of the local provider, try the template provider and create the file using a null_resource, so that the trigger takes care of recreating the file. Tested like below:
data "template_file" "inventory_cfg" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    bastion_host = local.Mongo_Bastion_host
    key_file     = var.private_key_file_path
  }
}

resource "null_resource" "copy-inventory" {
  triggers = {
    ip = local.random_id
  }

  provisioner "local-exec" {
    command = "echo ${data.template_file.inventory_cfg.rendered} >>inventory"
  }
}
After terraform apply I reran terraform plan and was able to see that it is creating the file again:
Terraform will perform the following actions:

  # null_resource.copy-inventory must be replaced
-/+ resource "null_resource" "copy-inventory" {
      ~ id       = "7011189362963809116" -> (known after apply)
      ~ triggers = {
          - "ip" = "f9f0f771-42ae-176e-3722-5b342665dea2"
        } -> (known after apply) # forces replacement
    }

Plan: 1 to add, 0 to change, 1 to destroy.
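If you would rather keep a plain local_file resource, another option (not part of the answer above) is to force replacement from the command line before each apply; the resource address below is a placeholder:

# Terraform 0.12-era CLI: mark the (placeholder) resource for recreation on the next apply
terraform taint local_file.foo

# Terraform 0.15.2 and later: plan the replacement in a single step
terraform apply -replace=local_file.foo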

Cannot destroy one module using Terraform destroy

I have created a few instances using a Terraform module:
resource "google_compute_instance" "cluster" {
count = var.num_instances
name = "redis-${format("%03d", count.index)}"
...
attached_disk {
source =
google_compute_disk.ssd[count.index].name
}
}
resource "google_compute_disk" "ssd" {
count = var.num_instances
name = "redis-ssd-${format("%03d", count.index)}"
...
zone = data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)]
}
resource "google_dns_record_set" "dns" {
count = var.num_instances
name = "${var.dns_name}-${format("%03d",
count.index +)}.something.com"
...
managed_zone = XXX
rrdatas = [google_compute_instance.cluster[count.index].network_interface.0.network_ip]
}
module "test" {
source = "/modules_folder"
num_instances = 2
...
}
How can I destroy one of the instances and its dependencies, say instance[1] + ssd[1] + dns[1]? I tried to destroy only one instance from the module using
terraform destroy -target module.test.google_compute_instance.cluster[1]
but it does not destroy ssd[1], and it tried to destroy both DNS records:
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
If I run
terraform destroy -target module.test.google_compute_disk.ssd[1]
it tried to destroy both instances and both DNS records:
module.test.google_compute_instance.cluster[0]
module.test.google_compute_instance.cluster[1]
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
as well.
How do I destroy only instance[1], ssd[1] and dns[1]? I feel my code may have a bug; maybe count.index has some problem that triggers the unexpected destroys?
I use: Terraform v0.12.29
I'm a bit confused as to why you want to terraform destroy; what you'd normally want to do is decrement num_instances and then terraform apply.
If you do a terraform destroy, the next terraform apply will put you right back to whatever you have configured in your Terraform source.
It's a bit hard to see what's going on without more of your source, but setting num_instances on the module and using it in the module's resources feels wonky.
I would recommend you upgrade Terraform and use count or for_each directly on the module rather than within it (this was introduced in Terraform 0.13.0); see https://www.hashicorp.com/blog/terraform-0-13-brings-powerful-meta-arguments-to-modular-workflows
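A rough sketch of that suggestion on Terraform 0.13 or later, with count moved onto the module call (the resources inside the module would then drop their own count and manage a single node each):

module "test" {
  source = "/modules_folder"
  count  = 2   # one module instance per Redis node
  ...
}

Each module instance then bundles one VM, one disk and one DNS record, so a single node can be removed either by lowering count or by targeting module.test[1] as a whole.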
Remove resource by resource:
terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME
resource "resource_type" "resource_name" {
...
}
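Applied to the addresses in the question, that would look something like the following (indexed addresses usually need quoting in the shell). Keep in mind that terraform destroy -target also destroys anything that depends on the targeted resources, which is consistent with the extra DNS records the question saw being selected, so review the plan before confirming:

terraform destroy \
  -target='module.test.google_compute_instance.cluster[1]' \
  -target='module.test.google_compute_disk.ssd[1]' \
  -target='module.test.google_dns_record_set.dns[1]'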

Helm chart using Terraform helm provider - error executing consecutive charts

I have to install Helm charts using the Terraform helm provider. I tried introducing a delay after the first chart, because installation of the first chart and its dependencies must finish before the second chart is installed. Below is the provisioning script:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
I see the "null_resource" delay module runs when the terraform provisioning for the first time. When tried deleting the resources and reran the Terraform script I see the null_resource module never gets executed again and the provisioning errors out. Are Terraform provisioners designed to run only once?
Helm has an optional wait flag that blocks the release until all of its resources are up. If you set wait on your helm_release resource, Terraform (and Helm under the hood) will wait for all resources to be up.
For example:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
wait = true
timeout = 200
}
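If the second chart must not begin until the first is fully up, it can also help to make the ordering explicit on the second release; a sketch in the question's syntax, combining depends_on with the waiting istio-init release above:

resource "helm_release" "istio" {
  name       = "istio"
  repository = "${data.helm_repository.istio.metadata.0.name}"
  chart      = "istio"
  version    = "${var.istio_version}"
  namespace  = "${var.istio_namespace}"

  # Do not start this release until istio-init (with wait = true) has finished.
  depends_on = ["helm_release.istio-init"]
}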

Dependency on local file creation

I am setting up an EKS cluster with Terraform following the example https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/aws_auth.tf and I now have two Terraform files:
kubeconfig.tf
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
aws-auth.tf
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig /tmp/kubeconfig"
}
...
}
When I run this, the local-exec command fails with
Output: error: stat tmp/kubeconfig: no such file or directory
On a second run it succeeds. I think the file is created after local-exec tries to use it, so local-exec should depend on the file resource. I tried to express the dependency using interpolation (an implicit dependency) like this:
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${resource.local_file.kubeconfig.filename}"
}
But this always gives me
Error: resource 'null_resource.update_config_map_aws_auth' provisioner
local-exec (#1): unknown resource 'resource.local_file' referenced in
variable resource.local_file.kubeconfig.filename
You don't need the resource. part when using the interpolation in the last code block.
When Terraform first started it only had resources, so there was no need to mark something as a resource; that was the only case. Modules and data sources were added later and needed some differentiation in naming, so they get the module. and data. prefixes so Terraform can tell resources, data sources, etc. apart.
So you probably want something like this:
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${local_file.kubeconfig.filename}"
}
}
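If you prefer the ordering to be stated explicitly rather than implied by the interpolation, depends_on can be added as well; in this sketch it is redundant with the reference above, but it makes the intent obvious:

resource "null_resource" "update_config_map_aws_auth" {
  provisioner "local-exec" {
    command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${local_file.kubeconfig.filename}"
  }

  # Redundant with the interpolation above, but states the ordering explicitly.
  depends_on = ["local_file.kubeconfig"]
}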
