Relative path in local-exec - terraform

I'm trying to reference a local script inside a local-exec provisioner. The script is located several levels above the module directory, and using ${path.module}/../../scripts/somescript.ps1 generates a "path not found" error.
Moving the scripts directory under the module directory solves the problem, but unfortunately that is not a valid option in my case. Working scenario: ${path.module}/scripts/somescript.ps1
I couldn't find anything saying this is a Terraform limitation or a bug, so any help is highly appreciated.
Thank you in advance.
This is my local-exec block:
provisioner "local-exec" {
interpreter = ["pwsh", "-Command"]
command = "${path.module}/scripts/Generate-SQLInfo.ps1 -user ${var.az_sql_server_admin_login} -dbname ${var.az_sql_db_name} -resourceGroupName ${module.resource_group.az_resource_group_name} -sqlServerName ${module.sql_server.sql_server_name} -vaultName ${module.keyvault.az_keyvault_name} -azSubscriptionID ${var.az_subscription_id}"
}

Try using working_dir
https://www.terraform.io/docs/provisioners/local-exec.html
provisioner "local-exec" {
working_dir = "${path.module}/../scripts/" # assuming it's this directory
interpreter = ["pwsh", "-Command"]
command = "Generate-SQLInfo.ps1 ..."
}
I don't have resources right now to test this but probably this should work for you.
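For example, combining working_dir with the arguments from the question (a sketch, untested; the ../../scripts path is assumed from the layout described above):
provisioner "local-exec" {
  working_dir = "${path.module}/../../scripts/"
  interpreter = ["pwsh", "-Command"]
  command     = "./Generate-SQLInfo.ps1 -user ${var.az_sql_server_admin_login} -dbname ${var.az_sql_db_name} -resourceGroupName ${module.resource_group.az_resource_group_name} -sqlServerName ${module.sql_server.sql_server_name} -vaultName ${module.keyvault.az_keyvault_name} -azSubscriptionID ${var.az_subscription_id}"
}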

Related

terraform - failing to set environment variable using local-exec

As part of a terraform run I'm trying to set environment variables on my Linux server using "local-exec" and command (I need them for a different application):
resource "null_resource" "set_env3" {
provisioner "local-exec" {
command = "export BASTION_SERVER_PUBLIC_IP=2.2.2.2"
}
}
But when running "echo $BASTION_SERVER_PUBLIC_IP" on my Linux server I get empty output, and I also can't find the BASTION_SERVER_PUBLIC_IP variable when running "printenv".
BTW - I have also tried the following, but again I cannot find the variable:
resource "null_resource" "update34" {
provisioner "local-exec" {
command = "env"
environment = {
BASTION = "5.5.5.5"
}
}
}
An export like that only makes the environment variable available to child processes spawned by the Terraform process. It doesn't export it to the parent shell process. In short: this is never going to work. You can't use a Terraform null_resource to set your local machine's environment variables.
What you need to do is define the value as a Terraform output. Then after you run terraform apply you do something like the following:
export BASTION_SERVER_PUBLIC_IP=$(terraform output -raw bastion_public_ip)
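For example, assuming the bastion host is an aws_instance named "bastion" (a hypothetical name), the output would be defined as:
output "bastion_public_ip" {
  value = aws_instance.bastion.public_ip # hypothetical resource address
}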

delete text file during terraform destroy

I am trying to delete a generated Ansible inventory hosts file from my local machine when executing terraform destroy.
When I run terraform apply I use provisioner "local-exec" to create the hosts file, which is used later by an Ansible playbook called during the deployment.
provisioner "local-exec" {
command = "echo master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)} >> hosts"
}
Is it possible to make sure that the hosts file is deleted when I am deleting all the resources with terraform destroy?
What is the easiest approach to delete hosts file when executing terraform destroy?
Thanks for your help, please let me know if my explanation was not clear enough.
I would suggest using the local_file resource to manage the inventory file. Terraform will then create the file on apply and delete it on destroy, as expected.
Example:
resource "local_file" "ansible_inventory" {
filename = "./hosts"
file_permission = "0664"
directory_permission = "0755"
content = <<-EOT
master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)}
EOT
}
Alternatively, you could add a local-exec provisioner (inside a resource block, e.g. a null_resource) that runs only when terraform destroy is executed, e.g.:
provisioner "local-exec" {
  when    = destroy
  command = "rm -f /path/to/file"
}
More information about using destroy-time provisioners can be found here [1].
[1] https://www.terraform.io/language/resources/provisioners/syntax#destroy-time-provisioners
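For completeness, a minimal sketch of that provisioner wrapped in a null_resource (the resource name is hypothetical); note that destroy-time provisioners must live inside a resource block and may only reference self, count.index, each.key, and path values:
resource "null_resource" "inventory_cleanup" { # hypothetical name
  provisioner "local-exec" {
    when    = destroy
    command = "rm -f ${path.module}/hosts"
  }
}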

Terraform - access root module script from child module

I have a ROOT_MODULE with main.tf:
#Root Module - Just run the script
resource "null_resource" "example" {
provisioner "local_exec" {
command = "./script.sh"
}
and script.sh:
echo "Hello world
Now I have another directory elsewhere where I've created a CHILD_MODULE with another main.tf:
#Child Module
module "ROOT_MODULE" {
source = "gitlabURL/ROOT_MODULE"
}
I've exported my plan file with terraform plan -out="planfile".
However, when I run terraform apply against the planfile, the directory I am currently in has no idea where script.sh is. I need to keep the script in the same directory as the root module. The script also lives inside a GitLab repository, so I don't have a local path to call it. Any idea how I can get this script into my child module or execute it from my planfile?
Error running command './script.sh': exit status 1. Output: cannot access 'script.sh': No such file or directory
You can access the path to the root module config to preserve pathing for files with the path.root intrinsic:
provisioner "local_exec" {
command = "${path.root}/script.sh"
}
However, based on your question, it appears you have swapped the terminology for root module and child module. That module appears to really be your child module rather than your root, so you should access the path with the path.module intrinsic:
provisioner "local_exec" {
command = "${path.module}/script.sh"
}
and then the pathing to the script will be preserved regardless of your current working directory.
These path expressions are documented in Terraform's expressions reference.
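For reference, a sketch of the layout implied by the question (directory names taken from it):
CHILD_MODULE/
  main.tf          # module "ROOT_MODULE" { source = "gitlabURL/ROOT_MODULE" }
ROOT_MODULE/       # copied into .terraform/modules/ by terraform init
  main.tf          # the null_resource with the local-exec provisioner
  script.sh        # ${path.module} resolves next to this file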

Terraform local-exec Provisioner to run on multiple Azure virtual machines

I had a working TF setup to spin up multiple Linux VMs in Azure. I was running a local-exec provisioner in a null_resource to execute an Ansible playbook. I was extracting the private IP addresses from the TF state file. The state file was stored locally.
I have recently configured Azure backend and now the state file is stored in a storage account.
I have modified the local provisioner and am trying to obtain all the private IP addresses to run the Ansible playbook against, as follows:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${element(azurerm_network_interface.unic.*.private_ip_address, count.index)}', vmlinux-playbook.yml"
I have also tried:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${azurerm_network_interface.unic.private_ip_address}', vmlinux-playbook.yml"
Both work fine with the first VM only and ignore the rest. I have also tried count.index+1 and self.private_ip_address, but no luck.
Actual result: TF provides the private IP of only the first VM to Ansible.
Expected result: TF to provide a list of all private IPs to Ansible so that it can run the playbook against all of them.
PS: I am also looking at using TF's remote_state data source, but it seems the state file contains IPs from previous builds as well, making it hard to extract the ones that belong to the current build.
I would appreciate any help.
Thanks
Asghar
The null_resource only runs once, so it works with the first VM and ignores the rest. You need to configure triggers on the null_resource with the NIC list so that it reruns whenever the set of NICs changes, and join all of the private IPs into the inventory argument. Sample code:
resource "null_resource" "Ansible4Ubuntu" {
triggers = {
network_interface_ids = "${join(",", azurerm_network_interface.unic.*.id)}"
}
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${join(" ", azurerm_network_interface.unic.*.private_ip_address)}, vmlinux-playbook.yml"
}
}
Adjust it as needed. For more information, see null_resource.
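Alternatively, as in the inventory example earlier on this page, you could render all of the IPs into a hosts file with local_file and point ansible-playbook at it; a rough sketch (untested, file name hypothetical):
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/hosts"
  content  = "${join("\n", azurerm_network_interface.unic.*.private_ip_address)}"
}

resource "null_resource" "Ansible4Ubuntu" {
  triggers = {
    inventory = "${local_file.ansible_inventory.content}"
  }
  provisioner "local-exec" {
    command = "sleep 20;ansible-playbook -i ${local_file.ansible_inventory.filename} vmlinux-playbook.yml"
  }
}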

How to set hostname with cloud-init and Terraform?

I am starting with Terraform. I am trying to make it set a friendly hostname, instead of the usual ip-10-10-10-10 that AWS uses, but I haven't found how to do it.
I tried using provisioners, like this:
provisioner "local-exec" {
command = "sudo hostnamectl set-hostname friendly.example.com"
}
But that doesn't work, the hostname is not changed.
So now, I'm trying this:
resource "aws_instance" "example" {
ami = "ami-XXXXXXXX"
instance_type = "t2.micro"
tags = {
Name = "friendly.example.com"
}
user_data = "${data.template_file.user_data.rendered}"
}
data "template_file" "user_data" {
template = "${file("user-data.conf")}"
vars {
hostname = "${aws_instance.example.tags.Name}"
}
}
And in user-data.conf I have a line to use the variable, like so:
hostname = ${hostname}
But this gives me a dependency cycle:
$ terraform apply
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.example, data.template_file.user_data
Plus, that would mean I have to create a different user_data data source for each instance, which seems like a pain. Can you not reuse them? That should be the purpose of templates, right?
I must be missing something, but I can't find the answer.
Thanks.
Using a Terraform provisioner with the local-exec block executes the command on the machine from which Terraform is run (see the local-exec documentation). Note specifically the line:
This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.
Therefore, switching the provisioner from a local-exec to a remote-exec:
provisioner "remote-exec" {
inline = ["sudo hostnamectl set-hostname friendly.example.com"]
}
should fix your issue with setting the hostname.
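Note that remote-exec also needs a connection block on the resource so Terraform can reach the instance over SSH; a minimal sketch (the user and key path are assumptions):
connection {
  type        = "ssh"
  user        = "ubuntu"              # assumed default user for the AMI
  host        = self.public_ip
  private_key = file("~/.ssh/id_rsa") # hypothetical key path
}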
Since you are supplying the tag to the instance as a string, why not just make that a var?
Replace the string friendly.example.com with ${var.instance-name} in your instance resource and in your data template. Then set the var:
variable "instance-name" {
default="friendly.example.com"
}
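Putting that together, a sketch of the refactored configuration (untested, keeping the question's interpolation syntax); both the tag and the template now reference the variable, so the cycle between aws_instance.example and data.template_file.user_data disappears:
data "template_file" "user_data" {
  template = "${file("user-data.conf")}"
  vars {
    hostname = "${var.instance-name}"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"
  user_data     = "${data.template_file.user_data.rendered}"
  tags = {
    Name = "${var.instance-name}"
  }
}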
I believe that your user-data.conf should be a bash script, starting with #!/usr/bin/env bash.
It should look like:
#!/usr/bin/env bash
hostname ${hostname}
