Terraform null_resource trigger when file missing

I am using a null_resource with provisioners to copy source files into a build folder and run package installation. Then I use archive_file to create a zip that is used for a serverless function deployment. The current trigger is the output_sha of a zip file I create only for the purpose of detecting a change to the source. All works well except when the copy of the source gets deleted from the build folder. At that point Terraform assumes that the source has not changed, the null_resource is not replaced, and therefore no files end up in the build folder.
Is there a way to trigger the null_resource when the file is missing as well as when the source changes?
Terraform version 1.2.8
Here is a sample of what is happening:
data "archive_file" "change_check" {
type = "zip"
source_dir = local.src_dir
output_path = "${local.change_check_path}/${var.lambda_name}.zip"
}
resource "null_resource" "dependencies" {
triggers = {
src_hash = data.archive_file.change_check.output_sha
}
provisioner "local-exec" {
command = <<EOT
rm -rf ${local.build_dir};
cp -R ${local.src_dir} ${local.build_dir};
EOT
}
provisioner "local-exec" {
command = "pip install -r ${local.build_dir}/requirements.txt -t ${local.build_dir}/python --upgrade"
}
}
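One possible approach (a sketch, untested): add a second trigger that stays stable while the build output exists but changes whenever it is missing, using the built-in fileexists() and timestamp() functions. Here requirements.txt is used as the marker file because the copy step above is known to place it in the build folder:
resource "null_resource" "dependencies" {
  triggers = {
    src_hash = data.archive_file.change_check.output_sha
    # While the marker file exists, the value is the constant "ok"; once it
    # has been deleted, timestamp() produces a new value on every plan,
    # forcing the null_resource to be replaced and the provisioners to rerun.
    build_present = fileexists("${local.build_dir}/requirements.txt") ? "ok" : timestamp()
  }

  # ... provisioners as above ...
}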

Related

Avoid Race Condition between resources in a Terraform Module

I have these two resource definitions in a terraform module:
resource "null_resource" "install_dependencies" {
provisioner "local-exec" {
command = "pip install -r requirements.txt -t . ; exit"
}
triggers = {
dependencies_versions = filemd5(".${var.package_requirements_path}")
}
}
data "archive_file" "lambda_source_package" {
type = "zip"
source_dir = ".${var.source_directory}"
output_path = var.zipped_lambda
}
The null resource installs a host of dependencies. These are then archived by lambda_source_package. However, I find that data "archive_file" "lambda_source_package" executes before all the pip packages are installed.
To solve this, I used the depends_on meta-argument. I realized this doesn't solve the issue either, since Terraform isn't able to find the zip file generated by data "archive_file" "lambda_source_package" before it completes execution. The output error message is: Error: unable to load "../outputs/xx.zip": open ../outputs/xx.zip: no such file or directory
The documentation says that the depends_on argument can "create more conservative plans that replace more resources than necessary", and hence it is not advisable to use it, especially when creating a module (which is my use case).
The documentation also suggests: "Instead of depends_on, we recommend using expression references to imply dependencies when possible." What does this mean? How can I apply it to my use case?
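One way to read that advice (a hedged sketch, not tested against your module): make an argument of the data source reference the null_resource directly, so Terraform infers the ordering without depends_on. A self-replacement with replace() leaves the value unchanged while still recording the dependency:
data "archive_file" "lambda_source_package" {
  type       = "zip"
  source_dir = ".${var.source_directory}"
  # replace() substitutes the null_resource's id with itself, so the result
  # is still exactly var.zipped_lambda; the reference alone makes Terraform
  # wait for the pip install to finish before building the archive.
  output_path = replace(var.zipped_lambda,
    null_resource.install_dependencies.id,
    null_resource.install_dependencies.id)
}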

delete text file during terraform destroy

I am trying to delete a generated Ansible inventory hosts file from my local machine when executing terraform destroy.
When I run terraform apply, I use provisioner "local-exec" to create the hosts file, which is used later by an Ansible playbook called during the deployment.
provisioner "local-exec" {
command = "echo master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)} >> hosts"
}
Is it possible to make sure that the hosts file is deleted when I delete all the resources with terraform destroy?
What is the easiest approach to deleting the hosts file when executing terraform destroy?
Thanks for your help; please let me know if my explanation was not clear enough.
I would suggest using the local_file resource to manage the inventory file. This way the file can easily be managed as expected when apply or destroy is run.
Example:
resource "local_file" "ansible_inventory" {
filename = "./hosts"
file_permission = "0664"
directory_permission = "0755"
content = <<-EOT
master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)}
EOT
}
You could add another local-exec provisioner and set it to be used only when terraform destroy is run, e.g.:
provisioner "local-exec" {
command = "rm -f /path/to/file"
when = destroy
}
More information about using destroy time provisioners here [1].
[1] https://www.terraform.io/language/resources/provisioners/syntax#destroy-time-provisioners
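Note that a destroy-time provisioner must live inside a resource block and may only reference its own attributes through self, not other resources or data sources. A sketch attaching it to the local_file resource above:
resource "local_file" "ansible_inventory" {
  # ... arguments as in the example above ...

  provisioner "local-exec" {
    when = destroy
    # Only self.* references are allowed in destroy-time provisioners.
    command = "rm -f ${self.filename}"
  }
}
Strictly speaking the provisioner is redundant here, since destroying a local_file resource already deletes the file it manages; an explicit destroy-time provisioner is only needed for files created outside Terraform's own resources.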

terraform-provider-local on Windows and Linux (Terraform Cloud Worker VMs)

On my Windows machine I have istioctl.exe on my PATH.
When I run this local-exec it works.
provisioner "local-exec" {
interpreter = ["bash", "-c"]
working_dir = "${path.module}/tmp"
command = <<EOH
istioctl version --remote=false;
EOH
}
For Terraform Cloud I first download istioctl and place it in ${path.module}/tmp.
But I need to change the local-exec above to ./istioctl version --remote=false;.
For TFC, is there a way to add istioctl to the PATH so I do not have to use the ./ prefix?
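One option (a sketch, untested on TFC worker VMs) is to extend PATH inside the command itself, since the working directory is where istioctl was downloaded:
provisioner "local-exec" {
  interpreter = ["bash", "-c"]
  working_dir = "${path.module}/tmp"
  command     = <<-EOH
    # Prepend the working directory to PATH for this command only.
    export PATH="$PWD:$PATH"
    istioctl version --remote=false
  EOH
}
An environment block on the provisioner could also set PATH, but it would replace the worker's existing PATH rather than extend it, since provisioner environment values cannot reference the current environment.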

running shell script with terraform provisioner "local-exec" in Azure DevOps returns permission denied

I'm trying to provision a Databricks PAT token with a null_resource and local-exec.
This is the code block:
resource "null_resource" "databricks_token" {
triggers = {
workspace = azurerm_databricks_workspace.databricks.id
key_vault_access = azurerm_key_vault_access_policy.terraform.id
}
provisioner "local-exec" {
command = "${path.cwd}/generate-pat-token.sh"
environment = {
RESOURCE_GROUP = var.resource_group_name
DATABRICKS_WORKSPACE_RESOURCE_ID = azurerm_databricks_workspace.databricks.id
KEY_VAULT = azurerm_key_vault.databricks_token.name
SECRET_NAME = "DATABRICKS-TOKEN"
DATABRICKS_ENDPOINT = "https://westeurope.azuredatabricks.net"
}
}
}
However, I get the following error:
2020-02-26T19:41:51.9455473Z null_resource.databricks_token: Provisioning with 'local-exec'...
2020-02-26T19:41:51.9458257Z null_resource.databricks_token (local-exec): Executing: ["/bin/sh" "-c" "/home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh"]
2020-02-26T19:41:51.9480441Z null_resource.databricks_token (local-exec): /bin/sh: 1: /home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh: Permission denied
2020-02-26T19:41:52.0399075Z Error: Error running command '/home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh': exit status 126. Output: /bin/sh: 1: /home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh: Permission denied
Side note: this is with Azure DevOps.
Any idea how to solve the permission denied error?
The root of this problem is how Azure DevOps stores artifacts and repositories. Here is a snippet from their documentation explaining why this happens:
https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/build-artifacts?view=azure-devops&tabs=yaml#download-to-debug
Under the TIPS section, you will see the following:
Build artifacts are stored on a Windows filesystem, which causes all UNIX permissions to be lost, including the execution bit. You might need to restore the correct UNIX permissions after downloading your artifacts from Azure Pipelines or TFS.
This means that your downloaded files (in this case, your shell script) have all Unix permissions wiped. To fix this, add a step that first sets the proper permissions on the shell script before executing it. See the example below, where I have added the fix to the code you provided.
resource "null_resource" "databricks_token" {
triggers = {
workspace = azurerm_databricks_workspace.databricks.id
key_vault_access = azurerm_key_vault_access_policy.terraform.id
}
provisioner "local-exec" {
command = "chmod +x ${path.cwd}/generate-pat-token.sh; ${path.cwd}/generate-pat-token.sh"
environment = {
RESOURCE_GROUP = var.resource_group_name
DATABRICKS_WORKSPACE_RESOURCE_ID = azurerm_databricks_workspace.databricks.id
KEY_VAULT = azurerm_key_vault.databricks_token.name
SECRET_NAME = "DATABRICKS-TOKEN"
DATABRICKS_ENDPOINT = "https://westeurope.azuredatabricks.net"
}
}
}
The command first sets the execute permission on the shell script and then runs it.
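An alternative sketch that avoids the execute bit entirely: invoke the script through the shell, which only needs read permission on the file:
provisioner "local-exec" {
  # bash reads and interprets the script itself, so no execute
  # permission is required on the file.
  command = "bash ${path.cwd}/generate-pat-token.sh"

  environment = {
    # ... same environment as above ...
  }
}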

Relative path in local-exec

I'm trying to reference a local script inside a local-exec provisioner. The script is located several levels above the module directory. Using ${path.module}/../../scripts/somescript.ps1 generates a path-not-found error.
Moving the scripts directory under the module's directory solves the problem, but unfortunately that is not a valid option in my case. Working scenario: ${path.module}/scripts/somescript.ps1
I didn't see anywhere that this is a TF limitation or a bug, so any help is highly appreciated.
Thank you in advance.
This is my local-exec block:
provisioner "local-exec" {
interpreter = ["pwsh", "-Command"]
command = "${path.module}/scripts/Generate-SQLInfo.ps1 -user ${var.az_sql_server_admin_login} -dbname ${var.az_sql_db_name} -resourceGroupName ${module.resource_group.az_resource_group_name} -sqlServerName ${module.sql_server.sql_server_name} -vaultName ${module.keyvault.az_keyvault_name} -azSubscriptionID ${var.az_subscription_id}"
}
Try using working_dir
https://www.terraform.io/docs/provisioners/local-exec.html
provisioner "local-exec" {
working_dir = "${path.module}/../scripts/" # assuming it's this directory
interpreter = ["pwsh", "-Command"]
command = "Generate-SQLInfo.ps1 ..."
}
I don't have the resources to test this right now, but it should probably work for you.
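If changing working_dir is not an option, another sketch is to anchor the script path with Terraform's abspath() function, so the ../.. segments are resolved inside an absolute path instead of relative to wherever pwsh happens to run:
provisioner "local-exec" {
  interpreter = ["pwsh", "-Command"]
  # abspath() turns the module path into an absolute filesystem path before
  # the relative segments are appended.
  command     = "${abspath(path.module)}/../../scripts/Generate-SQLInfo.ps1 ..."
}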
