I have a Terraform module, which we'll call parent, and a child module used inside of it that we'll refer to as child. The goal is to have the child module run its provisioner before the kubernetes_deployment resource is created. Basically, the child module builds and pushes a Docker image. If the image is not already present, the kubernetes_deployment will wait and eventually time out, because there is no image for the Deployment to use when creating pods. I've tried everything I've been able to find online (output variables in the child module, depends_on in the kubernetes_deployment resource, etc.) and have hit a wall. I would greatly appreciate any help!
parent.tf
module "child" {
source = ".\\child-module-path"
...
}
resource "kubernetes_deployment" "kub_deployment" {
...
}
child-module-path\child.tf
data "external" "hash_folder" {
program = ["powershell.exe", "${path.module}\\bin\\hash_folder.ps1"]
}
resource "null_resource" "build" {
triggers = {
md5 = data.external.hash_folder.result.md5
}
provisioner "local-exec" {
command = "${path.module}\\bin\\build.ps1 ${var.argument_example}"
interpreter = ["powershell.exe"]
}
}
Example Terraform error output:
module.parent.kubernetes_deployment.kub_deployment: Still creating... [10m0s elapsed]
Error output:
Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...
In your child module, declare an output value that depends on the null resource that has the provisioner associated with it:
output "build_complete" {
# The actual value here doesn't really matter,
# as long as this output refers to the null_resource.
value = null_resource.build.triggers.md5
}
Then in your "parent" module, you can either make use of module.child.build_complete in an expression (if including the MD5 string in the deployment somewhere is useful), or you can just declare that the resource depends on the output.
resource "kubernetes_deployment" "example" {
depends_on = [module.child.build_complete]
...
}
Because the output depends on the null_resource and the kubernetes_deployment depends on the output, transitively the kubernetes_deployment now effectively depends on the null_resource, creating the ordering you wanted.
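For the expression-based variant, one option is to thread the hash into the Deployment's pod template. This is a hypothetical sketch (the annotation key is illustrative, not from the question): referring to the module output creates the implicit dependency, and as a side effect a changed image hash also triggers a new rollout.

resource "kubernetes_deployment" "example" {
  ...
  spec {
    ...
    template {
      metadata {
        # Referencing the module output creates the dependency;
        # a new hash also forces the pods to be replaced.
        annotations = {
          "example.com/image-hash" = module.child.build_complete
        }
      }
      ...
    }
  }
}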
Related
I want to push the Terraform state file to a GitHub repo. The file function in Terraform fails to read .tfstate files, so I need to change their extension to .txt first. To automate this, I created a null resource with a provisioner that runs a command to copy the tfstate file as a .txt file in the same directory. I came across the depends_on argument, which lets you specify that a particular resource must be created before the current one. However, it is not working, and I immediately get an error that the 'terraform.txt' file doesn't exist when the file function tries to read it.
provider "github" {
token = "TOKEN"
owner = "USERNAME"
}
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = [null_resource.tfstate_to_txt]
}
The documentation for the file function explains this behavior:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
This paragraph also includes a suggestion for how to get the result you wanted: use the local_file data source, from the hashicorp/local provider, to read the file as a resource operation (during the apply phase) rather than as part of configuration loading:
resource "null_resource" "tfstate_to_txt" {
triggers = {
source_file = "terraform.tfstate"
dest_file = "terraform.txt"
}
provisioner "local-exec" {
command = "copy ${self.triggers.source_file} ${self.triggers.dest_file}"
}
}
data "local_file" "state" {
filename = null_resource.tfstate_to_txt.triggers.dest_file
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = data.local_file.state.content
}
Please note that although the above should get the order of operations you were asking about, reading the terraform.tfstate file while Terraform is running is a very unusual thing to do, and is likely to result in undefined behavior, because Terraform can repeatedly update that file at unpredictable moments throughout terraform apply.
If your intent is to have Terraform keep the state in a remote system rather than on local disk, the usual way to achieve that is to configure remote state, which will then cause Terraform to keep the state only remotely, and not use the local terraform.tfstate file at all.
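A minimal backend configuration looks like this (a sketch assuming an S3 bucket; the bucket, key, and region values are placeholders, not values from your question):

terraform {
  backend "s3" {
    bucket = "my-terraform-states"        # placeholder bucket name
    key    = "project/terraform.tfstate"  # placeholder state path
    region = "us-east-1"                  # placeholder region
  }
}

With a remote backend configured, terraform apply reads and writes state remotely, and there is no local terraform.tfstate file to copy or push at all.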
depends_on does not really work with null_resource provisioners.
Here's a workaround that can help you:
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 20"
}
triggers = {
"before" = null_resource.tfstate_to_txt.id
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = ["null_resource.delay"]
}
The delay null resource makes sure the second resource runs after the first. If the copy command takes more time, just change the sleep to a higher number.
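If you prefer not to shell out just to wait, the hashicorp/time provider offers a time_sleep resource that expresses the same delay declaratively. A sketch (the 20s duration carries over the sleep above; this is an alternative to the answer's null_resource, not what it used):

resource "time_sleep" "delay" {
  # Wait 20 seconds, but only after the copy provisioner has run.
  create_duration = "20s"
  depends_on      = [null_resource.tfstate_to_txt]
}

github_repository_file.state_push would then declare depends_on = [time_sleep.delay]. Note that the file() caveat from the previous answer still applies either way.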
I have created a few instances using a Terraform module:
resource "google_compute_instance" "cluster" {
count = var.num_instances
name = "redis-${format("%03d", count.index)}"
...
attached_disk {
source =
google_compute_disk.ssd[count.index].name
}
}
resource "google_compute_disk" "ssd" {
count = var.num_instances
name = "redis-ssd-${format("%03d", count.index)}"
...
zone = data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)]
}
resource "google_dns_record_set" "dns" {
count = var.num_instances
name = "${var.dns_name}-${format("%03d",
count.index +)}.something.com"
...
managed_zone = XXX
rrdatas = [google_compute_instance.cluster[count.index].network_interface.0.network_ip]
}
module "test" {
source = "/modules_folder"
num_instances = 2
...
}
How can I destroy one of the instances and its dependencies, say instance[1] + ssd[1] + dns[1]? I tried to destroy only one instance in the module using
terraform destroy -target module.test.google_compute_instance.cluster[1]
but it does not destroy ssd[1], and it tried to destroy both DNS records:
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
if I run
terraform destroy -target module.test.google_compute_disk.ssd[1]
it tried to destroy both instances and dns:
module.test.google_compute_instance.cluster[0]
module.test.google_compute_instance.cluster[1]
module.test.google_dns_record_set.dns[0]
module.test.google_dns_record_set.dns[1]
as well.
How can I destroy only instance[1], ssd[1] and dns[1]? I feel my code may have some bug; maybe count.index has some problem which triggers some unexpected destroys?
I use: Terraform v0.12.29
I'm a bit confused as to why you want to terraform destroy; what you'd normally want to do is decrement num_instances and then terraform apply.
If you do a terraform destroy, the next terraform apply will put you right back to whatever you have configured in your Terraform source.
It's a bit hard to see what's going on without more of your source, but setting num_instances on the module and using it in the module's resources feels wonky.
I would recommend you upgrade Terraform and use count or for_each directly on the module rather than within it (this was introduced in Terraform 0.13.0); see https://www.hashicorp.com/blog/terraform-0-13-brings-powerful-meta-arguments-to-modular-workflows
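For example, with Terraform 0.13+ you could use for_each on the module itself. This is a sketch assuming the module is reworked to manage a single instance (the instance_name variable and the names in the set are hypothetical): removing a key from the set then destroys only that instance's trio of resources on the next apply.

module "test" {
  source   = "/modules_folder"
  for_each = toset(["redis-000", "redis-001"])

  # Each module instance manages one VM + disk + DNS record, so
  # removing "redis-001" from the set destroys only that trio.
  instance_name = each.key
  ...
}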
Remove resource by resource:
terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME
resource "resource_type" "resource_name" {
...
}
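Applied to the resources in the question, that would be something like the following (quote the addresses so your shell does not interpret the brackets):

terraform destroy -target 'module.test.google_compute_instance.cluster[1]' -target 'module.test.google_compute_disk.ssd[1]' -target 'module.test.google_dns_record_set.dns[1]'

Listing all three addresses explicitly avoids relying on -target to pull in exactly the dependencies you expect.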
I am creating a webapp and a function. The web app calls the function.
My Terraform structure is like this:
main.tf
variable.tf
module/webapp
module/function
In the main.tf I call module/function to create the function, and then I call module/webapp to create the webapp.
I need to provide the function key in the configuration for the webapp.
Terraform azurerm provider 2.27.0 has added function keys as a data source.
https://github.com/terraform-providers/terraform-provider-azurerm/pull/7902
This is how it is described in terraform documentation.
https://www.terraform.io/docs/providers/azurerm/d/function_app_host_keys.html
data "azurerm_function_app_host_keys" "example" {
name = "example-function"
resource_group_name = azurerm_resource_group.example.name
}
How exactly do I return these keys to the main module? I tried the following, but it returns the error that follows the code:
resource "azurerm_function_app" "myfunc" {
name = var.function_app
location = var.region
...
tags = var.tags
}
output "hostname" {
value = azurerm_function_app.actico.default_hostname
}
output "functionkeys" {
value = azurerm_function_app.actico.azurerm_function_app_host_keys
}
Error: unsupported attribute
This object has no argument, nested block, or exported attribute named
"azurerm_function_app_host_keys".
Another attempt appears more promising. In the main module I added a data block, expecting that it would execute after the function has been created and fetch the key, but I am getting a 400 error:
data "azurerm_function_app_host_keys" "keymap" {
name = var.function_app_name
resource_group_name = var.resource_group_name
depends_on = [module.function_app]
}
Error making Read request on AzureRM Function App Hostkeys "FunctionApp": web.AppsClient#ListHostKeys: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Encountered an error (ServiceUnavailable) from host runtime." Details=[{"Message":"Encountered an error (ServiceUnavailable) from host runtime."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","Message":"Encountered an error
(ServiceUnavailable) from host runtime."}}]
Thanks,
Tauqir
I did some testing around this and there are two things. It looks like you need to deploy a function or restart the function app for it to generate the keys. If you're deploying the function and then try to get the keys, it doesn't seem to wait; there's a delay between the function starting and the keys becoming available. There are also issues with Terraform around this; I had the issue with v12, see #26074.
I've gone back to using a module I wrote (bottom link), which waits for the key to become available.
https://github.com/hashicorp/terraform/issues/26074
https://github.com/eltimmo/terraform-azure-function-app-get-keys
What you're doing is correct from what I can gather; you will need to pass the values into the webapp module in your main.tf like so:
module "webapp" {
  ...
  func_hostname = module.function.hostname
  functionkeys  = module.function.functionkeys
}
and have the variables set up in your webapp module
variable "func_hostname" {
  type = string
}

variable "functionkeys" {
  type = string
}
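Inside the webapp module those variables can then be wired into the app's settings. A hypothetical sketch (the setting names FUNCTION_HOSTNAME and FUNCTION_KEY are illustrative, not from the question):

resource "azurerm_app_service" "webapp" {
  ...
  app_settings = {
    "FUNCTION_HOSTNAME" = var.func_hostname
    "FUNCTION_KEY"      = var.functionkeys
  }
}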
What I can see is that you're trying to return azurerm_function_app_host_keys from the azurerm_function_app resource, where no such attribute exists.
Try returning the keys from the data source instead.
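For example, something like this in the function module (a sketch; it assumes the module also contains the azurerm_function_app resource shown earlier, and that the default host key is the one you want):

data "azurerm_function_app_host_keys" "example" {
  name                = azurerm_function_app.myfunc.name
  resource_group_name = var.resource_group_name
}

output "functionkeys" {
  # Exposes the function app's default host key to the parent module.
  value     = data.azurerm_function_app_host_keys.example.default_function_key
  sensitive = true
}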
I was able to achieve this by using depends_on on the data source, essentially letting Terraform know to first create the resource group and the Azure Function before trying to pull the keys:
resource "azurerm_resource_group" "rg" {
...
}
resource "azurerm_function_app" "app" {
...
}
# wait for the azure resource group and azure function are created.
# i believe that you can wait just for the function and this will work too
data "azurerm_function_app_host_keys" "hostkey" {
depends_on = [
azurerm_resource_group.rg,
azurerm_function_app.app
]
...
}
I have included a Terraform "null resource" which runs a "sleep 200" command, dependent on the previous resource finishing execution. For some reason I don't see the provisioner when I run terraform plan. What could be the reason for that? Below is the main.tf Terraform file:
resource "helm_release" "istio-init" {
name = "istio-init"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio-init"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 200"
}
depends_on = ["helm_release.istio-init"]
}
resource "helm_release" "istio" {
name = "istio"
repository = "${data.helm_repository.istio.metadata.0.name}"
chart = "istio"
version = "${var.istio_version}"
namespace = "${var.istio_namespace}"
}
Provisioners are a bit different from resources in Terraform. They are triggered either on creation or on destruction of a resource. No information about them is stored in the state, which is why adding, modifying, or removing a provisioner on an already-created resource has no effect on your plan or resource. The plan is a detailed output of how the state will change, and provisioners run only at creation/destruction time. When you run your apply you will still observe your sleep in action, because your null_resource will be created. I would reference the Terraform docs on this for more details:
Provisioners
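If you ever need a provisioner on an existing null_resource to run again, the usual trick (a sketch, not something the answer above prescribes) is to change one of its triggers, since any change to triggers forces the resource, and therefore its provisioner, to be recreated:

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 200"
  }

  triggers = {
    # Changing this value (e.g. bumping a revision number)
    # recreates the null_resource and re-runs the provisioner.
    revision = "1"
  }

  depends_on = [helm_release.istio-init]
}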
Is it correct to understand that things run in the order they are defined in main.tf?
I understand that it is necessary to use the triggers option to define ordering in Terraform,
but if the triggers option cannot be used, as with data "external", how can I define the execution order?
For example, I would like to run in order as follows.
get_my_public_ip -> ec2 -> db -> test_http_status
main.tf is as follows
data "external" "get_my_public_ip" {
program = ["sh", "scripts/get_my_public_ip.sh"]
}
module "ec2" {
...
}
module "db" {
...
}
data "external" "test_http_status" {
program = ["sh", "scripts/test_http_status.sh"]
}
I can only provide feedback on the code you provided, but one way to ensure the test_http_status script runs once the DB is ready is to use depends_on within a null_resource:
resource "null_resource" "test_status" {
depends_on = ["module.db.id"] #or any output variable
provisioner "local-exec" {
command = "scripts/test_http_status.sh"
}
}
But as @JimB already mentioned, Terraform isn't procedural, so ensuring order by position in the file isn't possible.
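For the get_my_public_ip -> ec2 ordering, the usual approach is to pass the data source's result into the module, which creates an implicit dependency. A sketch (the allowed_ip variable and the ip result key are assumptions about your module and script):

module "ec2" {
  ...
  # Referencing the data source result makes this module wait
  # for get_my_public_ip before creating its resources.
  allowed_ip = data.external.get_my_public_ip.result.ip
}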