How to set hostname with cloud-init and Terraform?

I am starting with Terraform. I am trying to make it set a friendly hostname, instead of the usual ip-10.10.10.10 that AWS uses. However, I haven't found how to do it.
I tried using provisioners, like this:
provisioner "local-exec" {
command = "sudo hostnamectl set-hostname friendly.example.com"
}
But that doesn't work, the hostname is not changed.
So now, I'm trying this:
resource "aws_instance" "example" {
ami = "ami-XXXXXXXX"
instance_type = "t2.micro"
tags = {
Name = "friendly.example.com"
}
user_data = "${data.template_file.user_data.rendered}"
}
data "template_file" "user_data" {
template = "${file("user-data.conf")}"
vars {
hostname = "${aws_instance.example.tags.Name}"
}
}
And in user-data.conf I have a line to use the variable, like so:
hostname = ${hostname}
But this gives me a dependency cycle:
$ terraform apply
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.example, data.template_file.user_data
Plus, that would mean I have to create a different user_data resource for each instance, which seems like a bit of a pain. Can you not reuse them? That should be the purpose of templates, right?
I must be missing something, but I can't find the answer.
Thanks.

A Terraform provisioner with the local-exec block executes the command on the machine from which Terraform is being run (see the documentation). Note specifically this line:
This invokes a process on the machine running Terraform, not on the resource. See the remote-exec provisioner to run commands on the resource.
Therefore, switching the provisioner from a local-exec to a remote-exec:
provisioner "remote-exec" {
inline = ["sudo hostnamectl set-hostname friendly.example.com"]
}
should fix your issue with setting the hostname.
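Note that remote-exec needs a connection block so Terraform can SSH into the instance, and the instance must be reachable from the machine running Terraform. A minimal sketch, assuming SSH key authentication; the user name and key path below are placeholders:
resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"

  # Assumed connection details: adjust the user for your AMI and point
  # private_key at whichever key pair the instance was launched with.
  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ec2-user"
    private_key = "${file("~/.ssh/id_rsa")}"
  }

  provisioner "remote-exec" {
    inline = ["sudo hostnamectl set-hostname friendly.example.com"]
  }
}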

Since you are supplying the tag to the instance as a string, why not just make that a var?
Replace the string friendly.example.com with ${var.instance-name} in your instance resource and in your data template. Then set the var:
variable "instance-name" {
default="friendly.example.com"
}
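Putting it together, a sketch of the resources from the question with the literal name swapped for the variable (this also removes the cycle, because the template no longer reads the instance's tags):
resource "aws_instance" "example" {
  ami           = "ami-XXXXXXXX"
  instance_type = "t2.micro"
  user_data     = "${data.template_file.user_data.rendered}"

  tags = {
    Name = "${var.instance-name}"
  }
}

data "template_file" "user_data" {
  template = "${file("user-data.conf")}"

  vars {
    hostname = "${var.instance-name}"
  }
}
The dependency now runs only from the instance to the template and from the template to the variable, so Terraform no longer reports a cycle.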

I believe that your user-data.conf should be a bash script that starts with #!/usr/bin/env bash.
It should look like this:
#!/usr/bin/env bash
hostname ${hostname}
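Alternatively, since EC2 user data is processed by cloud-init, user-data.conf can also be a #cloud-config document instead of a shell script. A sketch, still rendered by the same template_file, so ${hostname} is the template variable:
#cloud-config
# Sketch only: let cloud-init set the hostname declaratively.
preserve_hostname: false
hostname: ${hostname}
fqdn: ${hostname}
manage_etc_hosts: true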

Related

terraform - failing to set environment variable using local-exec

As part of a terraform run I'm trying to set environment variables on my Linux server using "local-exec" and a command (I need to use them with a different application):
resource "null_resource" "set_env3" {
provisioner "local-exec" {
command = "export BASTION_SERVER_PUBLIC_IP=2.2.2.2"
}
}
But when running "echo $BASTION_SERVER_PUBLIC_IP" on my Linux server I'm getting empty output, and I also can't locate the BASTION_SERVER_PUBLIC_IP parameter when running "printenv".
BTW, I have tried to run the following, but again I cannot find the parameter:
resource "null_resource" "update34" {
provisioner "local-exec" {
command = "env"
environment = {
BASTION = "5.5.5.5"
}
}
}
An export like that just exports the environment variable so it is available to other child processes spawned by the terraform process. It doesn't export it to the parent Unix shell process. In short: this is never going to work. You can't use a Terraform null_resource to set your local computer's environment variables.
What you need to do is define the value as a Terraform output. Then, after you run terraform apply, you do something like the following:
export BASTION_SERVER_PUBLIC_IP=$(terraform output -raw bastion_public_ip)
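A minimal sketch of the matching output block, assuming the bastion is an aws_instance named "bastion" (that resource name is a placeholder):
output "bastion_public_ip" {
  value = aws_instance.bastion.public_ip
}
Note that the -raw option requires Terraform 0.14 or newer.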

delete text file during terraform destroy

I am trying to delete a generated Ansible inventory hosts file from my local machine when executing terraform destroy.
When I run terraform apply, I use provisioner "local-exec" to create the hosts file, which is used later by an Ansible playbook that is called during the deployment.
provisioner "local-exec" {
command = "echo master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)} >> hosts"
}
Is it possible to make sure that the hosts file is deleted when I am deleting all the resources with terraform destroy?
What is the easiest approach to delete hosts file when executing terraform destroy?
Thanks for your help, please let me know if my explanation was not clear enough.
I would suggest using the local_file resource to handle the inventory file. This way we can easily manage the file as expected when apply or destroy is run.
Example:
resource "local_file" "ansible_inventory" {
filename = "./hosts"
file_permission = "0664"
directory_permission = "0755"
content = <<-EOT
master ansible_host=${element((aws_instance.kubeadm-node.*.public_ip),0)}
EOT
}
You could add another local-exec provisioner and set it to be used only when terraform destroy is run, e.g.:
provisioner "local-exec" {
command = "rm -f /path/to/file"
when = destroy
}
More information about using destroy time provisioners here [1].
[1] https://www.terraform.io/language/resources/provisioners/syntax#destroy-time-provisioners
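Keep in mind that a provisioner has to live inside a resource, and a destroy-time provisioner can only reference self (not other resources or variables). One way to wire it up, as a sketch, is to stash the path in the resource's own triggers:
resource "null_resource" "cleanup_inventory" {
  triggers = {
    hosts_file = "./hosts"
  }

  provisioner "local-exec" {
    # Runs only on destroy; references the path via self.triggers.
    when    = destroy
    command = "rm -f ${self.triggers.hosts_file}"
  }
}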

Provisioner local-exec: 'always_run' trigger doesn't work as expected

In my Terraform I have a mysql module as follows:
# create ssh tunnel to RDS instance
resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    command = "ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip} -v >./stdout.log 2>./stderr.log"
  }

  triggers = {
    always_run = timestamp()
  }
}
# create database
resource "mysql_database" "rds" {
  name       = var.db_name
  depends_on = [null_resource.ssh_tunnel]
}
When I add a new module and run terraform apply for the first time, it works as expected.
But when terraform apply runs without any changes I get an error:
Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
If I understand correctly, provisioner "local-exec" should execute the script every time because of the trigger settings. Could you explain how this is supposed to work?
I suspect that this happens because your first local-exec creates the tunnel in the background (-f). The second execution then fails because the first tunnel still exists; you never close it in your code. You would have to extend your code to check for existing tunnels and properly close them when you are done using them.
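A rough sketch of what that could look like, assuming the tunnel always uses local port 3306 and lsof is available on the machine running Terraform (illustrative only, not a hardened solution):
resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    # Close any tunnel already listening on 3306, then open a fresh one.
    # The '|| true' keeps the step from failing when no tunnel exists.
    command = <<-EOT
      lsof -ti tcp:3306 | xargs kill 2>/dev/null || true
      ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip}
    EOT
  }

  triggers = {
    always_run = timestamp()
  }
}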
In the end I implemented this solution instead of using null_resource: https://registry.terraform.io/modules/flaupretre/tunnel/ssh/latest

Terraform local-exec Provisioner to run on multiple Azure virtual machines

I had a working TF setup to spin up multiple Linux VMs in Azure. I was running a local-exec provisioner in a null_resource to execute an Ansible playbook. I was extracting the private IP addresses from the TF state file. The state file was stored locally.
I have recently configured the Azure backend, and now the state file is stored in a storage account.
I have modified the local provisioner and am trying to obtain all the private IP addresses to run the Ansible playbook against, as follows:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${element(azurerm_network_interface.unic.*.private_ip_address, count.index)}', vmlinux-playbook.yml"
I have also tried:
resource "null_resource" "Ansible4Ubuntu" {
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${azurerm_network_interface.unic.private_ip_address}', vmlinux-playbook.yml"
They both work fine with the first VM only and ignore the rest. I have also tried count.index+1 and self.private_ip_address, but no luck.
Actual result: TF provides the private IP of only the first VM to Ansible.
Expected result: TF to provide a list of all private IPs to Ansible so that it can run the playbook against all of them.
PS: I am also looking at using TF's remote_state data source, but it seems like the state file contains IPs from previous builds as well, making it hard to extract the ones relevant to the current build.
I would appreciate any help.
Thanks
Asghar
As Matt said, the null_resource only runs one time, so it works fine with the first VM and ignores the rest. You need to configure triggers for the null_resource with the NIC list so that it runs again when that list changes, and pass all the private IPs to Ansible. Sample code:
resource "null_resource" "Ansible4Ubuntu" {
triggers = {
network_interface_ids = "${join(",", azurerm_network_interface.unic.*.id)}"
}
provisioner "local-exec" {
command = "sleep 20;ansible-playbook -i '${join(" ", azurerm_network_interface.unic.*.private_ip_address)}, vmlinux-playbook.yml"
}
}
You can adjust it as you want. For more information, see null_resource.
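If you would rather run the playbook once per VM instead of passing the whole list at once, a count-based variant of the same idea could look like this (a sketch, using the same interpolation style as above):
resource "null_resource" "Ansible4Ubuntu" {
  # One provisioner run per network interface.
  count = "${length(azurerm_network_interface.unic.*.id)}"

  triggers = {
    network_interface_id = "${element(azurerm_network_interface.unic.*.id, count.index)}"
  }

  provisioner "local-exec" {
    # The trailing comma tells Ansible this is an inline host list, not a file.
    command = "sleep 20;ansible-playbook -i '${element(azurerm_network_interface.unic.*.private_ip_address, count.index)},' vmlinux-playbook.yml"
  }
}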

terraform local-exec command for executing mysql script

I created an aws_db_instance to provision an RDS MySQL database using my Terraform configuration. I need to execute SQL scripts (CREATE TABLE and INSERT statements) on the RDS instance. I'm stuck on what command to use here. Does anyone have sample code for my use case? Please advise. Thanks.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
This is possible using a null_resource that depends on aws_db_instance.my_db. This way the host is available when you run the command. It will only work if nothing prevents you from reaching the database, such as restrictive security group ingress rules or the instance not being publicly accessible.
Example:
resource "null_resource" "setup_db" {
depends_on = ["aws_db_instance.my_db"] #wait for the db to be ready
provisioner "local-exec" {
command = "mysql -u ${aws_db_instance.my_db.username} -p${var.my_db_password} -h ${aws_db_instance.my_db.address} < file.sql"
}
}
I don't believe you can use a provisioner with that type of resource. One option you could explore is having an additional step that takes the address of the RDS instance from a Terraform output and runs the SQL script against it.
So, for instance in a CI environment, you'd have Create Database -> Load Database -> Finished.
Below would be your Terraform to create the instance and output its address.
resource "aws_db_instance" "mydb" {
# ...
provisioner "local-exec" {
command = "command to execute script.sql"
}
}
output "username" {
value = "${aws_db_instance.mydb.username}"
}
output "address" {
value = "${aws_db_instance.mydb.address}"
}
The Load Database step would then run a shell script with the SQL logic, using terraform output address to obtain the address of the instance.
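For example, the Load Database step could be a small shell script along these lines (a sketch; the password variable and script name are placeholders, and -raw assumes Terraform 0.14+):
#!/usr/bin/env bash
set -euo pipefail

# Read connection details from the Terraform outputs defined above.
DB_ADDRESS=$(terraform output -raw address)
DB_USERNAME=$(terraform output -raw username)

# MY_DB_PASSWORD is assumed to be provided by the CI environment.
mysql -u "$DB_USERNAME" -p"$MY_DB_PASSWORD" -h "$DB_ADDRESS" < script.sql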
