I have to install Helm charts using the Terraform Helm provider. The first chart and its dependencies must finish installing before the second chart is installed, so I tried introducing a delay after the first release. Here is the provisioning script:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
I see the "null_resource" delay module runs when the terraform provisioning for the first time. When tried deleting the resources and reran the Terraform script I see the null_resource module never gets executed again and the provisioning errors out. Are Terraform provisioners designed to run only once?
Helm has an optional wait flag that blocks the release until all of its resources are up. If you set wait on your helm_release resource, Terraform (and Helm under the hood) will wait for all resources to be ready.
For example:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
wait = true
timeout = 200
}
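If the ordering between the two charts is the main concern, adding depends_on to the second release (a sketch reusing the resources from the question, not something your current code requires) removes the need for the sleep entirely:

resource "helm_release" "istio" {
  name       = "istio"
  repository = "${data.helm_repository.istio.metadata.0.name}"
  chart      = "istio"
  version    = "${var.istio_version}"
  namespace  = "${var.istio_namespace}"

  # Starts only after istio-init; with wait = true above, its resources are up first.
  depends_on = ["helm_release.istio-init"]
}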
I am trying to write a few checks against the helm_release resources. I want to use for_each to check the status of the Helm releases in the cluster. To start with, I have the setup below to deploy the current charts. I have a locals section where I define the chart names and versions:
locals {
  chart_versions = {
    redis = "6.0.1"
    nginx = "1.2.1"
    vault = "1.0.0"
  }
}
Then I refer to the chart versions in the helm_release resources as below:
resource "helm_release" "redis" {
name = "redis"
chart = "bitnami/redis"
version = local.chart_versions.redis
}
resource "helm_release" "nginx" {
name = "nginx"
chart = "nginx"
version = local.chart_versions.nginx
}
resource "helm_release" "vault" {
name = "vault"
chart = "valult"
version = local.chart_versions.vault
}
Observe that my resource names always match the chart names in locals. Now I am trying to loop over the locals to fetch the release status:
resource "null_resource" "helm_release_status" {
for_each = local.chart_versions
provisioner "local-exec" {
command = <<EOF
echo 'The name of the chart ${each.key} has been ${helm_release.${each.key}.status}' >> Infra-Smoke-Tests.txt
EOF
}
depends_on = [
helm_release.${each.key}
]
}
Since I named all my helm_release resources with the key values from the locals, I wanted to get something like this:
The name of the chart ${each.key} --> (redis) has been ${helm_release.${each.key}.status}
--> this should get the status of the redis release, and the loop has to cover all the charts.
I am able to get the chart names with ${each.key}, but I am not able to use this to get the status attribute of the helm_release. Does Terraform support this? I tried the join function to concatenate the strings but was not successful.
I am not able to use this to get the status attribute of the helm_release
The reason is that something like ${helm_release.${each.key}.status} is not supported in Terraform. You have to re-architect your configuration so that you never have to do such a thing.
The easiest way is through for_each:
locals {
  releases = {
    redis = {
      chart   = "bitnami/redis"
      version = local.chart_versions.redis
    },
    nginx = {
      chart   = "nginx"
      version = local.chart_versions.nginx
    },
    vault = {
      chart   = "vault"
      version = local.chart_versions.vault
    }
  }
}

resource "helm_release" "release" {
  for_each = local.releases

  name    = each.key
  chart   = each.value.chart
  version = each.value.version
}
provisioner "local-exec" {
command = <<EOF
echo 'The name of the chart ${each.key} has been ${helm_release.release[each.key].status}' >> Infra-Smoke-Tests.txt
EOF
}
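If you only need to read the statuses, an output over the same map avoids the provisioner altogether (a sketch; status is the helm_release attribute the question references):

output "helm_release_statuses" {
  # Maps each release name to its reported status, e.g. { redis = "deployed", ... }
  value = { for name, release in helm_release.release : name => release.status }
}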
I have a local_file resource in my Terraform configuration. The problem is how to tell Terraform that I want this resource to be recreated every time my client runs terraform apply, even if nothing changed in the resource itself. How can I make this possible?
The local_file resource can't do what I want. I use triggers for this in a null_resource, but there is no such option in the local_file resource; null_resource does what I want.
Instead of the local provider, try the template provider and create the file using a null_resource, so that the trigger takes care of recreating the file. Tested like below:
data "template_file" "inventory_cfg" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    bastion_host = local.Mongo_Bastion_host
    key_file     = var.private_key_file_path
  }
}

resource "null_resource" "copy-inventory" {
  triggers = {
    ip = local.random_id
  }

  provisioner "local-exec" {
    command = "echo ${data.template_file.inventory_cfg.rendered} >> inventory"
  }
}
After terraform apply, I reran terraform plan and was able to see that it is creating the file again:
Terraform will perform the following actions:
# null_resource.copy-inventory must be replaced
-/+ resource "null_resource" "copy-inventory" {
~ id = "7011189362963809116" -> (known after apply)
~ triggers = {
- "ip" = "f9f0f771-42ae-176e-3722-5b342665dea2"
} -> (known after apply) # forces replacement
}
Plan: 1 to add, 0 to change, 1 to destroy.
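If the file must be rewritten on every single apply regardless of input changes, a timestamp-based trigger is a common variant of the same idea (a sketch, not part of the tested setup above):

resource "null_resource" "copy-inventory-always" {
  triggers = {
    # timestamp() changes on every run, so this resource is replaced on every apply
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "echo ${data.template_file.inventory_cfg.rendered} >> inventory"
  }
}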
I have created a few instances using a Terraform module:
resource "google_compute_instance" "cluster" {
count = var.num_instances
name = "redis-${format("%03d", count.index)}"
...
attached_disk {
source =
google_compute_disk.ssd[count.index].name
}
}
resource "google_compute_disk" "ssd" {
count = var.num_instances
name = "redis-ssd-${format("%03d", count.index)}"
...
zone = data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)]
}
resource "google_dns_record_set" "dns" {
count = var.num_instances
name = "${var.dns_name}-${format("%03d",
count.index +)}.something.com"
...
managed_zone = XXX
rrdatas = [google_compute_instance.cluster[count.index].network_interface.0.network_ip]
}
module "test" {
source = "/modules_folder"
num_instances = 2
...
}
How can I destroy one of the instances and its dependencies, say instance[1] + ssd[1] + dns[1]? I tried to destroy only one resource of the module using
terraform destroy -target module.test.google_compute_instance.cluster[1]
but it does not destroy ssd[1], and it tried to destroy both DNS records:
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
If I run
terraform destroy -target module.test.google_compute_disk.ssd[1]
it tried to destroy both instances and both DNS records:
module.test.google_compute_instance.cluster[0]
module.test.google_compute_instance.cluster[1]
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
as well.
How do I destroy only instance[1], ssd[1] and dns[1]? I feel my code may have a bug; maybe count.index has some problem that triggers the unexpected destroys?
I use Terraform v0.12.29.
I'm a bit confused as to why you want to terraform destroy; what you'd normally do is decrement num_instances and then terraform apply.
If you do a terraform destroy, the next terraform apply will put you right back to whatever you have configured in your Terraform source.
It's a bit hard to see what's going on without more of your source, but setting num_instances on the module and using it inside the module's resources feels wonky.
I would recommend you upgrade Terraform and use count or for_each directly on the module rather than within it (this was introduced in Terraform 0.13.0); see https://www.hashicorp.com/blog/terraform-0-13-brings-powerful-meta-arguments-to-modular-workflows
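A rough sketch of what that could look like, assuming the module is reworked so each instance creates exactly one VM, disk and DNS record:

# Requires Terraform >= 0.13: one module instance per VM/disk/DNS set.
module "test" {
  source = "/modules_folder"
  count  = 2
}

Decrementing count to 1 and running terraform apply would then remove only the second VM, disk and DNS record.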
Remove resource by resource:
terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME
resource "resource_type" "resource_name" {
...
}
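Applied to the addresses in the question, that would look roughly like this (quoting each target keeps the shell from interpreting the brackets):

terraform destroy \
  -target 'module.test.google_compute_instance.cluster[1]' \
  -target 'module.test.google_compute_disk.ssd[1]' \
  -target 'module.test.google_dns_record_set.dns[1]'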
In my main.tf I have the following, which I run via Terraform 0.12.24 on Ubuntu:
module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.20.0"
namespace = null
stage = null
name = var.stack_name
attributes = []
tags = var.tags
region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
kubernetes_version = var.kubernetes_version
oidc_provider_enabled = var.oidc_provider_enabled
workers_role_arns = [
module.eks_node_group.eks_node_group_role_arn,
# module.eks_fargate_profile_fg.eks_fargate_profile_role_arn,
]
workers_security_group_ids = []
}
...
resource "local_file" "k8s_service_account_pods_default" {
filename = "${path.root}/kubernetes-default.yaml"
content = <<SERVICE_ACCOUNT
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-for-pods
namespace: default
annotations:
eks.amazonaws.com/role-arn: ${var.pod_role_arn}
SERVICE_ACCOUNT
provisioner "local-exec" {
command = "kubectl apply -f ${local_file.k8s_service_account_pods_default.filename}"
}
}
This works well most of the time; sometimes, I get this error:
Error: Error running command 'kubectl apply -f ./kubernetes-default.yaml':
exit status 1. Output: error: unable to recognize "./kubernetes-default.yaml":
Get https://<redacted>.us-east-2.eks.amazonaws.com/api?timeout=32s: dial tcp:
lookup <redacted>.us-east-2.eks.amazonaws.com on 192.168.2.1:53: no such host
If I run terraform apply again even immediately after, the kubectl apply works that time. I'm guessing there's about 30 seconds to 1 minute between the two kubectl apply runs, so probably the API server just wasn't ready yet.
It looks like there is a time_sleep resource, but that seems hackish. It also doesn't seem like I can give the local_file a depends_on for a resource inside a module (it seems Terraform is working on this).
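For reference, the time_sleep approach I'm considering would look roughly like this (a sketch; time_sleep comes from the hashicorp/time provider, and eks_cluster_endpoint is assumed here to be an output of the module):

resource "time_sleep" "wait_for_eks_api" {
  create_duration = "90s"

  triggers = {
    # referencing a module output makes the sleep start only after the cluster exists
    cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
  }
}

The local_file resource would then get depends_on = [time_sleep.wait_for_eks_api].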
Any suggestions? Is time_sleep my only option?
I have included a Terraform "null_resource" which runs a "sleep 200" command, dependent on the previous resource finishing execution. For some reason I don't see the provisioner when I run terraform plan. What could be the reason for that? Below is the main.tf Terraform file:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
Provisioners are a bit different from resources in Terraform. They are triggered either on creation of a resource or on destruction. No information about them is stored in the state, which is why adding, modifying, or removing a provisioner on an already-created resource has no effect on your plan or resource: the plan is a detailed output of how the state will change, and provisioners only run at creation or destruction time. When you run your apply you will still see your sleep in action, because the null_resource will be created. I would reference the Terraform docs on this for more details.
Provisioners
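If the sleep should run again later, for example whenever the chart version changes, a triggers block on the null_resource is one way to force it to be replaced, which re-runs the provisioner (a sketch based on the resources above, not something the plan behaviour itself requires):

resource "null_resource" "delay" {
  # Changing the trigger value replaces this resource, so the provisioner runs again.
  triggers = {
    istio_version = "${var.istio_version}"
  }

  provisioner "local-exec" {
    command = "sleep 200"
  }

  depends_on = ["helm_release.istio-init"]
}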