Terraform remote-exec fails on VM resource with DHCP-assigned IP

I'm attempting to run a remote-exec provisioner on a vSphere virtual machine resource in which the IP is assigned through DHCP rather than through a static IP setting on the network adapter (due to TF issues with Ubuntu 18.04).
When trying to run the remote-exec provisioner, it fails since it is unable to find the IP address. I've tried several things and am currently attempting to set the "host" property of the connection block to "self.default_ip_address", in hopes that it will use the IP address that is automatically assigned to the VM through DHCP once it connects to my network. Unfortunately I'm still not having any luck getting this to work.
Below is an example of my resource declaration. Is there a better method for running remote-exec when using DHCP that I'm just not aware of or am missing? I can't even seem to output the IP correctly after everything is built, even if I don't run the provisioner. Thanks for the help!
resource "vsphere_virtual_machine" "vm-nginx-2" {
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
name = "vm-nginx-2"
datastore_id = "${data.vsphere_datastore.datastore.id}"
folder = "${var.vsphere_vm_folder}"
enable_disk_uuid = true
wait_for_guest_net_timeout = 0
num_cpus = 2
memory = 2048
guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    label = "vm-nginx-2-disk"
    size  = "${data.vsphere_virtual_machine.template.disks.0.size}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      timeout = 0

      linux_options {
        host_name = "vm-nginx-2"
        domain    = "adc-corp.com"
      }

      network_interface {}

      ipv4_gateway = "192.168.0.1"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx"
    ]

    connection {
      host        = "${self.default_ip_address}"
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("files/adc-prod.pem")}"
    }
  }
}
# This also fails to print out an IP
output "vm-nginx-1-ip" {
  value = "${vsphere_virtual_machine.vm-nginx-1.default_ip_address}"
}
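One thing I'm looking at (just a sketch, assuming the provider's wait_for_guest_ip_timeout argument is available in the version I'm running) is keeping the routable-network check disabled but still letting Terraform wait for the guest to report an IP, so default_ip_address is populated before the provisioner and the output run:
resource "vsphere_virtual_machine" "vm-nginx-2" {
  # ...same arguments as above...

  # Skip the "routable network" waiter (the Ubuntu 18.04 workaround),
  # but wait up to 5 minutes for the guest tools to report any IP.
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 5
}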

Related

Need help using count in a remote-exec provisioner to retrieve multiple VM IPs

I want to use count to install a package on two of my VMs using a single remote-exec provisioner. As of now, I am doing that individually in two provisioner blocks, as below.
Current code using a remote-exec provisioner for each of the two VMs:
resource "null_resource" "install_nginx_host1" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[0].linux_vm_public_ips.instance-0
}
}
resource "null_resource" "install_nginx_host2" {
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[1].linux_vm_public_ips.instance-1
}
}
Can someone please help me with the value I should use to set host with count.index? I tried multiple things, e.g.
host = "module.virtual-machine[${count.index}].linux_vm_public_ips.instance-${count.index}"
But it returns the host strings as:
module.virtual-machine[0].linux_vm_public_ips.instance-0
module.virtual-machine[1].linux_vm_public_ips.instance-1
while I want the values those strings refer to.
This should be pretty straightforward to achieve. In your attempt, the whole expression is wrapped in quotes, so Terraform treats it as a string template and only interpolates ${count.index}, producing the literal text rather than evaluating the reference. Use a native expression instead:
resource "null_resource" "install_nginx_host1" {
count = 2
provisioner "remote-exec" {
inline = [
"sudo apt install nginx -y"
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("~/.ssh/id_rsa")
host = module.virtual-machine[count.index].linux_vm_public_ips["instance-${count.index}"]
}
}
Please make sure you understand how to use the count meta-argument [1].
[1] https://www.terraform.io/language/meta-arguments/count

How to set linux root user password when provisioning GCP VM using terraform

One option is to run a shell command like the one below, but it exposes the password. Is there another way I can do it? Please advise.
resource "google_compute_instance" "vm" {
name = "vm-test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
disk {
image = "projects/centos-cloud/global/images/family/centos-stream-7"
}
# Local SSD disk
disk {
type = "local-ssd"
scratch = true
}
network_interface {
network = "myNetwork"
access_config {}
}
}
resource "null_resource" "cluster" {
provisioner "remote-exec" {
inline = [
"echo 'new123' | sudo passwd --stdin root",
]
connection {
host = google_compute_instance.vm.network_interface.0.access_config.0.nat_ip
type = "ssh"
user = var.user
private_key = file(var.Source_privatekeypath)
}
}
}
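One direction I'm considering (just a sketch, assuming Terraform 0.14+ so the variable can be marked sensitive; the root_password variable name is my own) is to pass the password in as a sensitive variable instead of hardcoding it, though I understand it would still end up in the state file and on the remote command line:
variable "root_password" {
  type      = string
  sensitive = true # hides the value in plan/apply output (Terraform 0.14+)
}

resource "null_resource" "cluster" {
  provisioner "remote-exec" {
    inline = [
      # the password is no longer hardcoded in the configuration
      "echo '${var.root_password}' | sudo passwd --stdin root",
    ]

    connection {
      host        = google_compute_instance.vm.network_interface.0.access_config.0.nat_ip
      type        = "ssh"
      user        = var.user
      private_key = file(var.Source_privatekeypath)
    }
  }
}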

Deploy a machine (with a qcow2 image) on KVM automatically via Terraform

I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
name = "db1"
memory = "1024"
vcpu = 1
network_interface {
network_name = "default"
}
disk {
volume_id = "${libvirt_volume.centos7-qcow2.id}"
}
console {
type = "pty"
target_type = "serial"
target_port = "0"
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
My questions are:
1. Does the source path of my qcow file have to be local on my computer?
2. I have a KVM machine that I connect to remotely by its IP. Where should I put this IP in this tf file?
3. When I did it manually, I ran virt-manager. Do I need to reference it anywhere here?
Thanks a lot.
1. No. It can also be an https URL.
2. Do you mean the KVM host on which the VMs will be created? Then you need to configure remote KVM access on that host and put its IP in the uri of the provider block:
uri = "qemu+ssh://username@IP_OF_HOST/system"
3. You don't need virt-manager when you use Terraform. You should use Terraform resources to manage the VM.
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13
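For example, adapting the commented-out block from the question (the root user and the 192.168.100.10 address are just the question's placeholders), the provider for the remote host might look like this:
provider "libvirt" {
  # Remote KVM host reached over SSH; replace the user and IP with your own.
  uri = "qemu+ssh://root@192.168.100.10/system"
}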

Terraform remote-exec on each host created

I am trying to set up a group of EC2s for an app using Terraform in AWS. After each server is created, I want to mount the eNVM instance storage on each server using remote-exec. So: create 3 servers and then mount the eNVM on each of the 3 servers.
I attempted to use a null_resource, but I am getting errors about 'resource depends on non-existent resource' or interpolation errors.
variable "count" {
  default = 3
}

module "app-data-node" {
  source           = "some_git_source"
  count            = "${var.count}"
  instance_size    = "instance_data"
  hostname_pattern = "app-data"
  dns_domain       = "${data.terraform_remote_state.network.dns_domain}"
  key_name         = "app-automation"
  description      = "Automation App Data Instance"
  package_proxy    = "${var.package_proxy}"
}

resource "null_resource" "mount_envm" {
  # Only run this provisioner for app nodes
  #count = "${var.count}"

  depends_on = [
    "null_resource.${module.app-data-node}"
  ]

  connection {
    host        = "${aws_instance.i.*.private_ip[count.index]}"
    user        = "root"
    private_key = "app-automation"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}
The desired result is 3 EC2 instances, each with the eNVM mounted.
You can use a null_resource to run the provisioner. The triggers map ties each null_resource to its instance ID, which also gives you the dependency on the instances that the depends_on attempt was after:
resource "null_resource" "provisioner" {
count = "${var.count}"
triggers {
master_id = "${element(aws_instance.my_instances.*.id, count.index)}"
}
connection {
#host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
type = "ssh"
user = "..."
private_key = "..."
}
# set hostname
provisioner "remote-exec" {
inline = [
"sudo mkfs -t ext4 /dev/nvme0n1",
"sudo mkdir /data",
"sudo mount /dev/nvme0n1 /data"
]
}
}
This should do it for all instances at once as well.

Terraform OpenStack instance doesn't return floating IP

I'm setting up an OpenStack instance using Terraform. I'm writing the returned IP to a file, but for some reason it's always empty (I have looked at the instance in the OpenStack console and everything is correct with the IP, security groups, etc.).
resource "openstack_compute_instance_v2" "my-deployment-web" {
count = "1"
name = "my-name-WEB"
flavor_name = "m1.medium"
image_name = "RHEL7Secretname"
security_groups = [
"our_security_group"]
key_pair = "our-keypair"
network {
name = "public"
}
metadata {
expire = "2",
owner = ""
}
connection {
type = "ssh"
user = "vagrant"
private_key = "config/vagrant_private.key"
agent = "false"
timeout = "15m"
}
##Create Ansible host in staging inventory
provisioner "local-exec" {
command = "echo -e '\n[web]\n${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}' > ../ansible/inventories/staging/hosts"
interpreter = ["sh", "-c"]
}
}
The generated hosts file only gets [web] but no IP. Anyone know why?
[web]
Modifying the attribute reference from
${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}
to
${openstack_compute_instance_v2.my-deployment-web.network.0.access_ip_v4}
solved the problem. Thank you @Matt Schuchard
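For reference, with that change the local-exec provisioner from the question becomes (a sketch combining the original command with the working attribute):
provisioner "local-exec" {
  # Write the instance's IPv4 address into the Ansible staging inventory.
  command     = "echo -e '\n[web]\n${openstack_compute_instance_v2.my-deployment-web.network.0.access_ip_v4}' > ../ansible/inventories/staging/hosts"
  interpreter = ["sh", "-c"]
}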
