Terraform remote-exec on each host created - terraform

I am trying to set up a group of EC2 instances for an app using Terraform in AWS. After each server is created I want to mount the NVMe instance storage on each server using remote-exec: create 3 servers, then mount the NVMe device on each of the 3 servers.
I attempted to use a null_resource, but I am getting errors like 'resource depends on non-existent resource' or interpolation errors.
variable "count" {
  default = 3
}

module "app-data-node" {
  source           = "some_git_source"
  count            = "${var.count}"
  instance_size    = "instance_data"
  hostname_pattern = "app-data"
  dns_domain       = "${data.terraform_remote_state.network.dns_domain}"
  key_name         = "app-automation"
  description      = "Automation App Data Instance"
  package_proxy    = "${var.package_proxy}"
}
resource "null_resource" "mount_envm" {
# Only run this provisioner for app nodes
#count = "${var.count}"
depends_on = [
"null_resource.${module.app-data-node}"
]
connection {
host = "${aws_instance.i.*.private_ip[count.index]}"
user = "root"
private_key = "app-automation"
}
provisioner "remote-exec" {
inline = [
"sudo mkfs -t ext4 /dev/nvme0n1",
"sudo mkdir /data",
"sudo mount /dev/nvme0n1 /data"
]
}
}
The expected result is 3 EC2 instances, each with the NVMe instance storage mounted.

You can use a null_resource to run the provisioner:
resource "null_resource" "provisioner" {
count = "${var.count}"
triggers {
master_id = "${element(aws_instance.my_instances.*.id, count.index)}"
}
connection {
#host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
type = "ssh"
user = "..."
private_key = "..."
}
# set hostname
provisioner "remote-exec" {
inline = [
"sudo mkfs -t ext4 /dev/nvme0n1",
"sudo mkdir /data",
"sudo mount /dev/nvme0n1 /data"
]
}
}
This should do it for all instances at once as well.
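Since the instances in the question come from a module, the null_resource can key off the module's outputs instead of aws_instance directly. A minimal sketch, assuming the module exports list outputs named instance_ids and private_ips (those output names and the key file path are assumptions, not from the question):
resource "null_resource" "mount_nvme" {
  count = "${var.count}"

  # Re-run if the underlying instance is replaced
  triggers = {
    instance_id = "${element(module.app-data-node.instance_ids, count.index)}"
  }

  connection {
    host        = "${element(module.app-data-node.private_ips, count.index)}"
    type        = "ssh"
    user        = "root"
    private_key = "${file("app-automation.pem")}"  # hypothetical key path; must be the key contents, not a name
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}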

Related

Execute bash script on Ubuntu using Terraform

Is it possible to execute shell commands on an Ubuntu OS using a Terraform script?
I have to do some initial configuration before the rest of the Terraform scripts execute.
You could define a local-exec provisioner in your resource:
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
That will execute right after the resource is created. There are other types of provisioners; see: https://www.terraform.io/language/resources/provisioners/syntax
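For example, the file provisioner can copy a script onto the machine so a remote-exec step can run it afterwards. A minimal sketch; scripts/setup.sh and the destination path are placeholders, and both blocks would sit inside a resource that also defines a connection:
# inside a resource block with a connection {} defined
provisioner "file" {
  source      = "scripts/setup.sh"   # hypothetical local script
  destination = "/tmp/setup.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/setup.sh",
    "/tmp/setup.sh"
  ]
}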
It depends on where your Ubuntu OS is. If it's local, you can do something like this:
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
If it's a remote resource, for example an AWS EC2 instance:
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
Also, if it's an EC2 instance, a commonly used option is a user_data script, which runs with root privileges immediately after the resource is created, but only once; it will not run again even if you reboot the instance. In Terraform you can do something like this:
resource "aws_instance" "server" {
ami = "ami-123456"
instance_type = "t3.medium"
availability_zone = "eu-central-1b"
vpc_security_group_ids = [aws_security_group.server.id]
subnet_id = var.subnet1
private_ip = var.private-ip
key_name = var.key_name
associate_public_ip_address = true
tags = {
Name = "db-server"
}
user_data = <<EOF
mkdir abc
apt update && apt install nano
EOF
}
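If the script grows beyond a couple of lines, it is usually cleaner to keep it in a separate file and load it with file() (or templatefile() when it needs variables). A sketch, assuming a hypothetical scripts/bootstrap.sh next to the configuration containing the same commands as the heredoc above:
resource "aws_instance" "server" {
  ami           = "ami-123456"
  instance_type = "t3.medium"

  # scripts/bootstrap.sh is a hypothetical local file holding the shell script
  user_data = file("${path.module}/scripts/bootstrap.sh")
}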

Need help in using count in remote-exec provisioner to retrieve multiple VMs' IPs

I want to use count to install a package on 2 of my VMs using a single remote-exec provisioner. As of now, I am doing that individually in 2 provisioner blocks, as below.
---- present code using remote-exec provisioners for 2 VMs ----
resource "null_resource" "install_nginx_host1" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[0].linux_vm_public_ips.instance-0
}
}
resource "null_resource" "install_nginx_host2" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[1].linux_vm_public_ips.instance-1
}
}
Can someone please help me with the value I should use to set host via count.index? I tried multiple things, e.g.
host = "module.virtual-machine[${count.index}].linux_vm_public_ips.instance-${count.index}"
But that renders the host strings literally as:
module.virtual-machine[0].linux_vm_public_ips.instance-0
module.virtual-machine[1].linux_vm_public_ips.instance-1
while I want the values those strings refer to.
This should be pretty straightforward to achieve:
resource "null_resource" "install_nginx_host1" {
count = 2
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[count.index].linux_vm_public_ips["instance-${count.index}"]
}
}
Please make sure you understand how to use the count meta-argument [1].
[1] https://www.terraform.io/language/meta-arguments/count
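As an alternative to count, for_each can key each null_resource by instance index. A sketch under the same assumption about the module's output shape as the answer above (the for expression and the resource name are illustrative, not from the question):
resource "null_resource" "install_nginx" {
  # Build a map of "index => public IP" from the module instances;
  # assumes each instance exposes linux_vm_public_ips keyed "instance-<n>"
  for_each = { for idx, vm in module.virtual-machine : tostring(idx) => vm.linux_vm_public_ips["instance-${idx}"] }

  provisioner "remote-exec" {
    inline = [
      "sudo apt install nginx -y"
    ]
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = each.value
  }
}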

How to set linux root user password when provisioning GCP VM using terraform

How to set linux root user password when provisioning GCP VM using terraform
One option is to run a shell command like below, but it exposes the password. Is there any other way I can do it? Please advise.
resource "google_compute_instance" "vm" {
name = "vm-test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
disk {
image = "projects/centos-cloud/global/images/family/centos-stream-7"
}
# Local SSD disk
disk {
type = "local-ssd"
scratch = true
}
network_interface {
network = "myNetwork"
access_config {}
}
}
resource "null_resource" "cluster" {
provisioner "remote-exec" {
inline = [
"echo 'new123' | sudo passwd --stdin root",
]
connection {
host = google_compute_instance.vm.network_interface.0.access_config.0.nat_ip
type = "ssh"
user = var.user
private_key = file(var.Source_privatekeypath)
}
}
}
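One way to keep the password itself out of the configuration is to pass it in as a variable marked sensitive, so it is not hard-coded in source and is redacted in plan output; note the value still travels to the host inside the remote command. A sketch, assuming a variable named root_password supplied via a tfvars file or TF_VAR_root_password (the variable name is an assumption):
variable "root_password" {
  type      = string
  sensitive = true
}

resource "null_resource" "cluster" {
  provisioner "remote-exec" {
    inline = [
      # interpolate the sensitive variable instead of a literal password
      "echo '${var.root_password}' | sudo passwd --stdin root",
    ]

    connection {
      host        = google_compute_instance.vm.network_interface.0.access_config.0.nat_ip
      type        = "ssh"
      user        = var.user
      private_key = file(var.Source_privatekeypath)
    }
  }
}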

How to get remote-exec provisioner to apply after disk attachments?

I have a script that I need to run after my instance has been provisioned and the volumes have been attached:
resource "aws_instance" "controller" {
...
provisioner "remote-exec" {
connection {
type = "ssh"
user = "centos"
}
inline = [
"download and run script to verify environment"
]
}
}
resource "aws_ebs_volume" "controller-ebs-sdb" {
...
}
resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
device_name = "/dev/sdb"
volume_id = "${aws_ebs_volume.controller-ebs-sdb.id}"
instance_id = "${aws_instance.controller.id}"
}
Currently the script fails because the volume has not yet been attached when it runs.
Is it possible to only run the remote-exec script after the volumes have been attached?
You can run a provisioner on any resource (consider the null_resource pattern for an extreme version of this), so the best thing here is to run it on the aws_volume_attachment resource:
# ...

resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
  device_name = "/dev/sdb"
  volume_id   = "${aws_ebs_volume.controller-ebs-sdb.id}"
  instance_id = "${aws_instance.controller.id}"

  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.controller.public_ip}"
      type = "ssh"
      user = "centos"
    }

    inline = [
      "download and run script to verify environment"
    ]
  }
}
You can also consider moving the provisioner into a null_resource with a triggers block, as sketched below. Cruder options are to sleep for a few seconds, have the script retry itself, or check for the existence of the disk before attempting to use it.
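A minimal sketch of that null_resource variant, reusing the names from the question (the resource name verify_environment is illustrative). Referencing the attachment's id both orders the provisioner after the attachment and re-runs it if the attachment is replaced:
resource "null_resource" "verify_environment" {
  triggers = {
    attachment_id = "${aws_volume_attachment.controller-volume-attachment-sdb.id}"
  }

  connection {
    host = "${aws_instance.controller.public_ip}"
    type = "ssh"
    user = "centos"
  }

  provisioner "remote-exec" {
    inline = [
      "download and run script to verify environment"
    ]
  }
}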

Running local-exec provisioner on all EC2 instances after creation

I currently have a Terraform file to create EC2 instances on AWS that looks like this:
resource "aws_instance" "influxdata" {
count = "${var.ec2-count-influx-data}"
ami = "${module.amis.rhel73_id}"
instance_type = "${var.ec2-type-influx-data}"
vpc_security_group_ids = ["${var.sg-ids}"]
subnet_id = "${element(module.infra.subnet,count.index)}"
key_name = "${var.KeyName}"
iam_instance_profile = "Custom-role"
tags {
Name = "influx-data-node"
ASV = "${module.infra.ASV}"
CMDBEnvironment = "${module.infra.CMDBEnvironment}"
OwnerContact = "${module.infra.OwnerContact}"
custodian_downtime = "off"
OwnerEid = "${var.OwnerEid}"
}
ebs_block_device {
device_name = "/dev/sdg"
volume_size = 500
volume_type = "io1"
iops = 2000
encrypted = true
delete_on_termination = true
}
user_data = "${file("terraform/attach_ebs.sh")}"
connection {
private_key = "${file("/Users/usr111/Downloads/usr111_CD.pem")}"
user = "ec2-user"
}
provisioner "remote-exec" {
inline = ["echo just checking for ssh. ttyl. bye."]
}
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
}
resource "aws_route53_record" "influx-data-route" {
count = "${var.ec2-count-influx-data}"
zone_id = "${var.r53-zone}"
name = "influx-data-0${count.index}"
type = "A"
ttl = "300"
// matches up record N to instance N
records = ["${element(aws_instance.influxdata.*.private_ip, count.index)}"]
}
resource "local_file" "inventory-meta" {
filename = "inventory"
content = <<-EOF
[meta]
${join("\n",aws_instance.influxmeta.*.private_ip)}
[data]
${join("\n",aws_instance.influxdata.*.private_ip)}
EOF
}
What I'm struggling to figure out is how to get this part to run after I create the inventory file:
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
Right now I'm passing an IP into Ansible but I want to pass in the inventory file, which is only created after Terraform provisions all of the instances.
Since you are using AWS, maybe you could try the Dynamic Inventory script (ec2.py), and your provisioner could look like this:
provisioner "local-exec" {
command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip}" }
In your playbook you are going to need to wait for SSH to become available since Ansible is making the connection and not Terraform.
- name: wait for ssh
  hosts: localhost
  gather_facts: no
  tasks:
    - local_action: wait_for port=22 host="{{ ip }}" search_regex=OpenSSH delay=10
So the command should look like this:
provisioner "local-exec" {
command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip}" --extra-vars 'ip=${self.public_ip}'}
You can also copy your playbooks to the host with the "File Provisioner", install ansible and run the playbook locally with "remote-exec", but that's up to you.
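If the goal is specifically to run the playbook against the generated inventory file once all instances exist, a null_resource keyed on the inventory content is one option. A sketch reusing the file and playbook names from the question (the null_resource and its trigger are assumptions):
resource "null_resource" "ansible_run" {
  # Re-run whenever the rendered inventory changes; referencing the
  # local_file also orders this after the inventory is written
  triggers = {
    inventory = "${local_file.inventory-meta.content}"
  }

  provisioner "local-exec" {
    command = "ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i inventory"
  }
}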
