Provisioner local-exec: 'always_run' trigger doesn't work as expected - terraform

In my Terraform configuration I have a MySQL module as follows:
# create ssh tunnel to RDS instance
resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    command = "ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip} -v >./stdout.log 2>./stderr.log"
  }
  triggers = {
    always_run = timestamp()
  }
}
# create database
resource "mysql_database" "rds" {
  name       = var.db_name
  depends_on = [null_resource.ssh_tunnel]
}
When I add the new module and run terraform apply for the first time, it works as expected.
But when terraform apply runs again without any changes, I get an error:
Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
If I understand correctly, the local-exec provisioner should execute the script every time because of the trigger settings. Could you explain how this should work?

I suspect this happens because your first local-exec creates the tunnel in the background (-f). The second execution then fails because the first tunnel still exists; you never close it anywhere in your code. You would have to extend your code to check whether a tunnel already exists and to close it properly when you are done using it, for example along the lines of the sketch below.
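A minimal sketch of that idea, reusing the resource and variables from the question; it assumes nc and a Unix-like shell are available on the machine running Terraform, and the control-socket path is arbitrary:
resource "null_resource" "ssh_tunnel" {
  triggers = {
    always_run = timestamp()
  }

  # Open the tunnel only if nothing is listening on 3306 yet, and keep an SSH
  # control socket around so the tunnel can be shut down explicitly later.
  provisioner "local-exec" {
    command = <<-EOT
      nc -z 127.0.0.1 3306 && exit 0
      ssh -i ${var.private_key} -M -S /tmp/rds_tunnel.sock \
        -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip}
    EOT
  }

  # Close the tunnel via the control socket when the resource is destroyed;
  # the destination argument is only a placeholder, the socket identifies the tunnel.
  provisioner "local-exec" {
    when    = destroy
    command = "ssh -S /tmp/rds_tunnel.sock -O exit placeholder || true"
  }
}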

In the end I implemented this module https://registry.terraform.io/modules/flaupretre/tunnel/ssh/latest instead of using a null_resource.

Related

delete text file during terraform destroy

I am trying to delete a generated Ansible inventory hosts file from my local machine when executing terraform destroy.
When I run terraform apply I use a local-exec provisioner to create the hosts file, which is later used by an Ansible playbook called during the deployment.
provisioner "local-exec" {
command = "echo master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)} >> hosts"
}
Is it possible to make sure that the hosts file is deleted when I am deleting all the resources with terraform destroy?
What is the easiest approach to deleting the hosts file when executing terraform destroy?
Thanks for your help, please let me know if my explanation was not clear enough.
I would suggest using the local_file resource to handle the inventory file. That way the file is managed as expected when apply or destroy is run.
Example:
resource "local_file" "ansible_inventory" {
filename = "./hosts"
file_permission = "0664"
directory_permission = "0755"
content = <<-EOT
master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)}
EOT
}
You could add another local-exec provisioner and set it to be used only when terraform destroy is run, e.g.:
provisioner "local-exec" {
command = "rm -f /path/to/file"
when = destroy
}
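Note that a destroy-time provisioner has to live inside a resource block, so in practice you would attach it to something like a null_resource. A minimal sketch (the path is illustrative; if you manage the file with local_file as above, Terraform already deletes it on destroy, so this is only needed for files created outside of Terraform's management):
resource "null_resource" "cleanup_inventory" {
  # Runs only when this resource is destroyed, i.e. during terraform destroy.
  provisioner "local-exec" {
    when    = destroy
    command = "rm -f ./hosts"
  }
}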
More information about using destroy time provisioners here [1].
[1] https://www.terraform.io/language/resources/provisioners/syntax#destroy-time-provisioners

How to run a remote-exec provisioner on destroy for more than one instance

I am using Terraform to set up a Docker Swarm cluster on OpenStack, along with Ansible for configuring the newly created VMs. I want docker swarm leave to run first on any VM that is going to be removed when I decrease the number of instances (VMs) and apply the changes via terraform apply. It works when I destroy instances one by one, but with 2 instances at once I get an error:
Error: Cycle: module.swarm_cluster.openstack_compute_instance_v2.swarm-cluster-hosts[3] (destroy), module.swarm_cluster.openstack_compute_instance_v2.swarm-cluster-hosts[2] (destroy)
Here is the script:
resource "openstack_compute_instance_v2" "my_cluster"{
provisioner "remote-exec" {
when = destroy
inline = [ "sudo docker swarm leave" ]
}
connection {
type = "ssh"
user = var.ansible_user
timeout = "3m"
private_key = var.private_ssh_key
host = self.access_ip_v4
}
}
Terraform: 0.12

Terraform local-exec Provisioner to run on multiple Azure virtual machines

I had a working TF setup to spin up multiple Linux VMs in Azure. I was running a local-exec provisioner in a null_resource to execute an Ansible playbook. I was extracting the private IP addresses from the TF state file. The state file was stored locally.
I have recently configured Azure backend and now the state file is stored in a storage account.
I have modified the local provisioner and am trying to obtain all the private IP addresses to run the Ansible playbook against, as follows:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${element(azurerm_network_interface.unic.*.private_ip_address, count.index)}', vmlinux-playbook.yml"
I have also tried:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${azurerm_network_interface.unic.private_ip_address}', vmlinux-playbook.yml"
They both work fine with the first VM only and ignore the rest. I have also tried count.index+1 and self.private_ip_address, but no luck.
Actual result: TF provides the private IP of only the first VM to Ansible.
Expected result: TF to provide a list of all private IPs to Ansible so that it can run the playbook against all of them.
PS: I am also looking at using Terraform's remote_state data source, but it seems the state file contains IPs from previous builds as well, making it hard to extract the ones relevant to the current build.
I would appreciate any help.
Thanks
Asghar
As Matt said, the null_resource only runs once, so it works with the first VM and ignores the rest. You need to configure triggers for the null_resource with the NIC list so that it runs again whenever that list changes. Sample code:
resource "null_resource" "Ansible4Ubuntu" {
triggers = {
network_interface_ids = "${join(",", azurerm_network_interface.unic.*.id)}"
}
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${join(" ", azurerm_network_interface.unic.*.private_ip_address)}, vmlinux-playbook.yml"
}
}
You can adjust it as needed. For more information, see null_resource.
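An alternative, not from the original answer: give the null_resource a count so each NIC gets its own provisioner run. A sketch, assuming azurerm_network_interface.unic itself uses count:
resource "null_resource" "Ansible4Ubuntu" {
  count = length(azurerm_network_interface.unic)

  triggers = {
    nic_id = azurerm_network_interface.unic[count.index].id
  }

  provisioner "local-exec" {
    # The trailing comma makes Ansible treat the argument as an inline inventory list.
    command = "sleep 20; ansible-playbook -i '${azurerm_network_interface.unic[count.index].private_ip_address},' vmlinux-playbook.yml"
  }
}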

How to set hostname with cloud-init and Terraform?

I am starting with Terraform. I am trying to make it set a friendly hostname, instead of the usual ip-10.10.10.10 that AWS uses. However, I haven't found how to do it.
I tried using provisioners, like this:
provisioner "local-exec" {
command = "sudo hostnamectl set-hostname friendly.example.com"
}
But that doesn't work, the hostname is not changed.
So now, I'm trying this:
resource "aws_instance" "example" {
ami = "ami-XXXXXXXX"
instance_type = "t2.micro"
tags = {
Name = "friendly.example.com"
}
user_data = "${data.template_file.user_data.rendered}"
}
data "template_file" "user_data" {
template = "${file("user-data.conf")}"
vars {
hostname = "${aws_instance.example.tags.Name}"
}
}
And in user-data.conf I have a line to use the variable, like so:
hostname = ${hostname}
But this gives me a dependency cycle:
$ terraform apply
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.example, data.template_file.user_data
Plus, that would mean I have to create a different user_data resource for each instance, which seems a bit like a pain. Can you not reuse them? That should be the purpose of templates, right?
I must be missing something, but I can't find the answer.
Thanks.
Using a Terraform provisioner with the local-exec block executes the command on the machine from which Terraform is being run: documentation. Note specifically the line:
This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.
Therefore, switching the provisioner from a local-exec to a remote-exec:
provisioner "remote-exec" {
inline = ["sudo hostnamectl set-hostname friendly.example.com"]
}
should fix your issue with setting the hostname.
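Note that remote-exec also needs connection details so Terraform can reach the instance over SSH; a minimal sketch, with the user name and key path being illustrative:
resource "aws_instance" "example" {
  # ...

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(pathexpand("~/.ssh/id_rsa"))
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname friendly.example.com"]
  }
}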
Since you are supplying the tag to the instance as a string, why not just make that a var?
Replace the string friendly.example.com with ${var.instance-name} in your instance resource and in your data template. Then set the var:
variable "instance-name" {
default="friendly.example.com"
}
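Put together, a sketch of how this breaks the cycle: both the instance and the template now reference the variable, so neither depends on the other (AMI, instance type and file name taken from the question):
data "template_file" "user_data" {
  template = "${file("user-data.conf")}"
  vars {
    hostname = "${var.instance-name}"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"
  user_data     = "${data.template_file.user_data.rendered}"
  tags = {
    Name = "${var.instance-name}"
  }
}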
I believe your user-data.conf should be a bash script that starts with #!/usr/bin/env bash.
It should look like this:
#!/usr/bin/env bash
hostname ${hostname}

terraform local-exec command for executing mysql script

I created an aws_db_instance to provision an RDS MySQL database using a Terraform configuration. I need to execute SQL scripts (CREATE TABLE and INSERT statements) on the RDS instance. I'm stuck on what command to use here. Does anyone have sample code for my use case? Please advise. Thanks.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
This is possible using a null_resource that depends on aws_db_instance.my_db. That way the host is available when you run the command. It will only work if nothing prevents you from reaching the database, such as a missing security group ingress rule or the instance not being publicly accessible.
Example:
resource "null_resource" "setup_db" {
depends_on = ["aws_db_instance.my_db"] #wait for the db to be ready
provisioner "local-exec" {
command = "mysql -u ${aws_db_instance.my_db.username} -p${var.my_db_password} -h ${aws_db_instance.my_db.address} < file.sql"
}
}
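A variation on the same command, in case you prefer to keep the password out of the visible command line: the local-exec provisioner accepts an environment block, and the mysql client reads MYSQL_PWD. Resource and variable names follow the example above:
resource "null_resource" "setup_db" {
  depends_on = ["aws_db_instance.my_db"] # wait for the db to be ready

  provisioner "local-exec" {
    # MYSQL_PWD keeps the password out of the process argument list
    environment = {
      MYSQL_PWD = "${var.my_db_password}"
    }
    command = "mysql -u ${aws_db_instance.my_db.username} -h ${aws_db_instance.my_db.address} < file.sql"
  }
}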
I don't believe you can use a provisioner with that type of resource. One option you could explore is having an additional step that takes the address of the RDS instance from a Terraform output and runs the SQL script.
So, for instance in a CI environment, you'd have Create Database -> Load Database -> Finished.
Below is the Terraform to create the instance and output the resource address.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
output "username" {
value = "${aws_db_instance.mydb.username}"
}
output "address" {
value = "${aws_db_instance.mydb.address}"
}
The Load Database step would then run a shell script with the SQL logic, using terraform output address to obtain the address of the instance.
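For example, a sketch of that step (the password variable and SQL file name are illustrative):
#!/usr/bin/env bash
set -euo pipefail

# Read the connection details exported by the Terraform outputs above.
# (On Terraform 0.14+ you may want `terraform output -raw address` instead.)
DB_HOST="$(terraform output address)"
DB_USER="$(terraform output username)"

# Run the SQL script against the freshly created instance.
mysql -u "$DB_USER" -p"$MY_DB_PASSWORD" -h "$DB_HOST" < script.sql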
