I am setting up an EKS cluster with Terraform, following the example at https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/aws_auth.tf, and I now have two Terraform files:
kubeconfig.tf
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
aws-auth.tf
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig /tmp/kubeconfig"
}
...
}
When I run this, the local-exec command fails with:
Output: error: stat tmp/kubeconfig: no such file or directory
On a second run it succeeds. I think the file is only created after local-exec tries to use it, so local-exec should depend on the file resource. I therefore try to express the dependency using interpolation (an implicit dependency) like this:
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${resource.local_file.kubeconfig.filename}"
}
But this always gives me
Error: resource 'null_resource.update_config_map_aws_auth' provisioner
local-exec (#1): unknown resource 'resource.local_file' referenced in
variable resource.local_file.kubeconfig.filename
You don't need the resource. part when using the interpolation in the last code block.
When Terraform first started, it only had resources, so there was no need to label something as a resource: a bare reference could only mean one thing. Modules and data sources were added later and needed their own namespaces, so those references get a module. or data. prefix so Terraform can tell resources, data sources, and module outputs apart.
So you probably want something like this:
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "tmp/kubeconfig"
}
data "template_file" "kubeconfig" {
template = "${file("template/kubeconfig.tpl")}"
...
}
resource "null_resource" "update_config_map_aws_auth" {
provisioner "local-exec" {
command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig ${local_file.kubeconfig.filename}"
}
}
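If you prefer an explicit dependency over interpolation, a depends_on reference gives the same ordering guarantee. A minimal sketch, written in Terraform 0.12+ bare-reference syntax (on 0.11 the reference would be the quoted string "local_file.kubeconfig"):

resource "null_resource" "update_config_map_aws_auth" {
  # Explicit dependency: do not run until the kubeconfig file has been written
  depends_on = [local_file.kubeconfig]

  provisioner "local-exec" {
    command = "kubectl apply -f tmp/config-map-aws-auth_${var.cluster-name}.yaml --kubeconfig tmp/kubeconfig"
  }
}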
Related
I'm using Terraform 0.14 and have two resources: one is a local_file that creates a file on the local machine based on a variable, and the other is a null_resource with a local-exec provisioner.
This all works as intended, but I can only get it to either always run the provisioner (using an always-changing trigger like timestamp()) or run it only once. Now I'd like to get it to run every time (and only when) the local_file actually changes.
Does anybody know how I can set a trigger that changes when the local_file content has changed? For example, a last-updated timestamp or maybe a checksum value?
resource "local_file" "foo" {
content = var.foobar
filename = "/tmp/foobar.txt"
}
resource "null_resource" "null" {
triggers = {
always_run = timestamp() # this will always run
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}
You can use a hash of the file content as the trigger, so it changes only when the content does:
resource "null_resource" "null" {
triggers = {
file_changed = md5(local_file.foo.content)
}
provisioner "local-exec" {
command = "/tmp/somescript.py"
}
}
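Any content-derived hash works here: the null_resource is replaced, and the provisioner re-run, only when the hash value changes. For example, sha256() instead of md5() is just a variation of the same idea:

resource "null_resource" "null" {
  triggers = {
    # re-run only when the rendered file content changes
    file_changed = sha256(local_file.foo.content)
  }

  provisioner "local-exec" {
    command = "/tmp/somescript.py"
  }
}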
I want to push the Terraform state file to a GitHub repo. The file function in Terraform fails to read .tfstate files, so I need to change their extension to .txt first. To automate this, I created a null_resource with a provisioner that copies the tfstate file as a .txt file in the same directory. I came across the depends_on argument, which lets you specify that a particular resource needs to be created before the current one. However, it is not working, and I immediately get an error that the 'terraform.txt' file doesn't exist when the file function demands it.
provider "github" {
token = "TOKEN"
owner = "USERNAME"
}
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = [null_resource.tfstate_to_txt]
}
The documentation for the file function explains this behavior:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
This paragraph also includes a suggestion for how to get the result you wanted: use the local_file data source, from the hashicorp/local provider, to read the file as a resource operation (during the apply phase) rather than as part of configuration loading:
resource "null_resource" "tfstate_to_txt" {
triggers = {
source_file = "terraform.tfstate"
dest_file = "terraform.txt"
}
provisioner "local-exec" {
command = "copy ${self.triggers.source_file} ${self.triggers.dest_file}"
}
}
data "local_file" "state" {
filename = null_resource.tfstate_to_txt.triggers.dest_file
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = data.local_file.state.content
}
Please note that although the above should get the order of operations you were asking about, reading the terraform.tfstate file while Terraform is running is a very unusual thing to do, and is likely to result in undefined behavior, because Terraform can repeatedly update that file at unpredictable moments throughout terraform apply.
If your intent is to have Terraform keep the state in a remote system rather than on local disk, the usual way to achieve that is to configure remote state, which will then cause Terraform to keep the state only remotely, and not use the local terraform.tfstate file at all.
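As a sketch of that approach (the backend settings below are placeholders, assuming an S3 bucket you already have; any supported backend works the same way):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"    # placeholder bucket name
    key    = "eks/terraform.tfstate" # placeholder state path
    region = "us-east-1"             # placeholder region
  }
}

With a remote backend configured, there is no local terraform.tfstate to copy or push in the first place.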
depends_on does not really work with a null_resource provisioner.
Here's a workaround that can help you:
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 20"
}
triggers = {
"before" = null_resource.tfstate_to_txt.id
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = ["null_resource.delay"]
}
The delay null_resource makes sure the second resource runs after the first. If the copy command takes more time, just increase the sleep value.
I have a use case where I read all variables from locals in Terraform, as shown below, but before that I want to run a null_resource block that runs a Python script and writes all the data into the file that locals reads.
So, in simple words, my use case is to execute a null_resource block at the start of the Terraform run and then run all the other resource blocks.
My current code sample is as follows:
// executing script for populating data in app_config.json
resource "null_resource" "populate_data" {
provisioner "local-exec" {
command = "python3 scripts/data_populate.py"
}
}
// reading data variables from app_config.json file
locals {
config_data = jsondecode(file("${path.module}/app_config.json"))
}
How do I achieve that? All I have tried is adding a triggers block inside locals, as follows, but even that did not work:
locals {
  triggers = {
    order = null_resource.populate_data.id
  }

  config_data = jsondecode(file("${path.module}/app_config.json"))
}
You can use depends_on:
resource "null_resource" "populate_data" {
provisioner "local-exec" {
command = "python3 scripts/data_populate.py"
}
}
// reading data variables from app_config.json file
locals {
depends_on = [null_resource.populate_data]
config_data = jsondecode(file("${path.module}/app_config.json"))
}
Now locals will always be evaluated after populate_data.
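If that does not give you the ordering you expect, the file() documentation quoted earlier points at another option: read the file through the local_file data source from the hashicorp/local provider, so the read happens during apply and can respect dependencies. A sketch along those lines:

// read the generated file during apply, after the script has run
data "local_file" "app_config" {
  filename   = "${path.module}/app_config.json"
  depends_on = [null_resource.populate_data]
}

locals {
  config_data = jsondecode(data.local_file.app_config.content)
}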
I have included a Terraform null_resource which runs a "sleep 200" command, dependent on the previous resource finishing execution. For some reason I don't see the provisioner when I run terraform plan. What could be the reason for that? Below is the main.tf Terraform file:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
Provisioners are a bit different from resources in Terraform. They are triggered either on creation or on destruction of a resource. No information about them is stored in the state, which is why adding, modifying, or removing a provisioner on an already-created resource has no effect on your plan or the resource. The plan is a detailed output of how the state will change, and provisioners only run at creation/destruction time, so they don't show up in it. When you run your apply you will still observe your sleep in action, because your null_resource will be created. I would reference the Terraform docs on provisioners for more details.
Provisioners
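If the goal is to see something in the plan whenever the sleep should run again, one option is to force the null_resource itself to be replaced, for example by changing a value in its triggers map. A sketch in the same 0.11-style syntax as the question (the trigger value here is just an example; pick whatever should drive a re-run):

resource "null_resource" "delay" {
  # The plan shows this resource being replaced whenever the trigger value
  # changes; the provisioner then runs again on the new instance.
  triggers = {
    istio_version = "${var.istio_version}"
  }

  provisioner "local-exec" {
    command = "sleep 200"
  }

  depends_on = ["helm_release.istio-init"]
}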
I have a Terraform script which creates a config.json file and then runs a command that uses that config.json:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
}
This all works great, but when I run terraform destroy I'd like to run a different command. I tried to do this with a destroy-time provisioner in a null_resource by adding the following:
resource "null_resource" "test" {
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}
The script does run, but only after the config file has been deleted, so it errors because it needs that config file to exist in order to know what to delete.
How would I fix this?
Thanks!
I moved the destroy-time provisioner into the original resource, and it worked great:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
# Delete on_destroy
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}
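On current Terraform versions the same pattern still works, with two syntax caveats worth noting: the when keyword is written unquoted, and a destroy-time provisioner may only reference its own resource via self. A sketch of the same resource in that style:

resource "local_file" "config" {
  filename = "config.json"
  content  = "..."

  # Deploy using config
  provisioner "local-exec" {
    command = "deploy"
  }

  # Delete on destroy; unquoted keyword, and only self.* references
  # are allowed inside a destroy-time provisioner.
  provisioner "local-exec" {
    when    = destroy
    command = "delete"
  }
}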