Deploy a machine (with a qcow2 image) on KVM automatically via Terraform

I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
  name   = "db1"
  memory = "1024"
  vcpu   = 1

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = "${libvirt_volume.centos7-qcow2.id}"
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}
My questions are:
Does the path of my qcow2 file (source) have to be local on my computer?
I have a KVM machine that I connect to remotely by its IP. Where should I put this IP in the tf file?
When I did it manually, I ran virt-manager; do I need to reference it anywhere here?
Thanks a lot.

No, the source can also be an HTTPS URL.
Do you mean a KVM host on which the VMs will be created? Then you need to configure remote KVM access on that host, and put its IP in the uri of the provider block:
uri = "qemu+ssh://username@IP_OF_HOST/system"
You don't need virt-manager when you use Terraform; manage the VMs through Terraform resources instead.
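For example, to point Terraform at the remote host instead of the local one, the provider block could look like the sketch below (the user and IP are placeholders taken from the commented-out block in the question; it assumes the user can reach libvirtd on that host over SSH):

provider "libvirt" {
  # Connect to the libvirt daemon on the remote KVM host over SSH
  uri = "qemu+ssh://root@192.168.100.10/system"
}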
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13

Related

proxmox/terraform/cloud-init - incorrect ipconfig

I am trying to build a VM on my Proxmox host (from a template I created with Packer), and all is well except that it does not take the IP I specified, but gets one from DHCP instead.
This is my provider config:
# Proxmox Provider
# ---
# Initial Provider Configuration for Proxmox
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "2.9.3"
    }
  }
}

variable "proxmox_api_url" {
  type = string
}

variable "proxmox_api_token_id" {
  type = string
}

variable "proxmox_api_token_secret" {
  type = string
}

provider "proxmox" {
  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret

  # (Optional) Skip TLS Verification
  pm_tls_insecure = true
}
And this is my .tf
# Proxmox Full-Clone
# ---
# Create a new VM from a clone
resource "proxmox_vm_qemu" "doc-media-0" {
  # VM General Settings
  target_node = "proxmox01"
  vmid        = "100"
  name        = "doc-media-0"
  desc        = "Docker media server running on Ubuntu"

  # VM Advanced General Settings
  onboot = true

  # VM OS Settings
  clone = "ubuntu-server-jammy-docker"

  # The destination resource pool for the new VM
  pool = "prod"

  # VM System Settings
  agent = 1

  # VM CPU Settings
  cores   = 3
  sockets = 2
  cpu     = "host"

  # Storage settings
  disk {
    /* id = 0 */
    type    = "virtio"
    storage = "data-fast"
    /* storage_type = "directory" */
    size = "20G"
    /* backup = true */
  }

  # VM Memory Settings
  memory = 10240

  # VM Network Settings
  network {
    bridge = "vmbr0"
    model  = "virtio"
  }

  # VM Cloud-Init Settings
  os_type = "cloud-init"

  # (Optional) IP Address and Gateway
  ipconfig0 = "ip=192.168.1.20/16,gw=192.168.1.1"

  # (Optional) Name servers
  nameserver = "192.168.1.1"

  # (Optional) Default User
  ciuser = "fabrice"

  # (Optional) Add your SSH KEY
  sshkeys = <<EOF
ssh-ed25519 <public-ssh-key-removed>
EOF
}
Expected result
IP is 192.168.1.20
by virtue of ipconfig0 = "ip=192.168.1.20/16,gw=192.168.1.1"
Actual result
VM got a DHCP address
What is odd is that the other settings applied: my gateway is correct, my user is there, and so is my public SSH key.
Ignore me; it did apply the IP, just not right after the Terraform run completed.
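If it helps to confirm what the guest eventually reports, an output along these lines can be added (a minimal sketch; it assumes the Telmate provider's computed default_ipv4_address attribute, which needs the QEMU guest agent running in the VM):

output "doc-media-0-ip" {
  # IP address reported by the guest agent once cloud-init has finished
  value = proxmox_vm_qemu.doc-media-0.default_ipv4_address
}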

How to create multiple VMs using the Terraform libvirt provider?

I just want to know how to create multiple VMs using the Terraform libvirt provider.
I managed to create one VM; is there a way to do it in one tf file?
# libvirt.tf
# add the provider
provider "libvirt" {
uri = "qemu:///system"
}
#create pool
resource "libvirt_pool" "ubuntu" {
name = "ubuntu-pool"
type = "dir"
path = "/libvirt_images/ubuntu_pool/"
}
# create image
resource "libvirt_volume" "image-qcow2" {
name = "ubuntu-amd64.qcow2"
pool = libvirt_pool.ubuntu.name
source ="${path.module}/downloads/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
I used cloud-config to define the user and the SSH connection.
I want to complete this code so I can create multiple VMs (see the sketch below).
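One common approach in a single tf file is to put a count on the volume and domain resources. A minimal sketch building on the pool and image above (the VM names, memory size and network name are assumptions for illustration):

variable "vm_count" {
  default = 2
}

resource "libvirt_volume" "vm-qcow2" {
  count  = var.vm_count
  name   = "ubuntu-amd64-${count.index}.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "${path.module}/downloads/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}

resource "libvirt_domain" "vm" {
  count  = var.vm_count
  name   = "ubuntu-${count.index}"
  memory = 1024
  vcpu   = 1

  disk {
    volume_id = libvirt_volume.vm-qcow2[count.index].id
  }

  network_interface {
    network_name = "default"
  }
}

Per-VM cloud-init data can be handled the same way with a counted libvirt_cloudinit_disk resource referenced via count.index.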

How to pull and deploy a Docker image from Azure Container Registry using Terraform?

I am new to Terraform/Azure and I have been working on an automation job for a few days, which has taken a lot of my time as I couldn't find any solutions on the internet.
So, if anyone knows how to pull and deploy a Docker image from an Azure Container Registry using Terraform, please share the details; any assistance will be much appreciated.
A sample code snippet would be of great help.
You can use the Docker provider and the docker_image resource from Terraform, which pulls the image to your local Docker host. There are several authentication options for authenticating to ACR.
provider "docker" {
host = "unix:///var/run/docker.sock"
registry_auth {
address = "<ACR_NAME>.azurecr.io"
username = "<DOCKER_USERNAME>"
password = "<DOCKER_PASSWORD>"
}
}
resource "docker_image" "my_image" {
name = "<ACR_NAME>.azurecr.io/<IMAGE>:<TAG>"
}
output "image_id" {
value = docker_image.my_image.name
}
Then use that image to deploy a container with the docker_container resource. I have not used this resource myself, so I cannot give you a tested snippet, but there are examples in the documentation.
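A rough, untested sketch based on those docs could look like this (the container name and port mapping are placeholder assumptions):

resource "docker_container" "my_container" {
  name = "my-container"

  # Reference the image pulled by the docker_image resource above
  image = docker_image.my_image.name

  ports {
    internal = 80
    external = 8080
  }
}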
One option to pull (and run) an image from ACR with Terraform is to use Azure Container Instances. A simple example for reference:
resource "azurerm_container_group" "example" {
name = "my-continst"
location = "location"
resource_group_name = "name"
ip_address_type = "public"
dns_name_label = "aci-uniquename"
os_type = "Linux"
container {
name = "hello-world"
image = "acr-name.azurecr.io/aci-helloworld:v1"
cpu = "0.5"
memory = "1.5"
ports {
port = 80
protocol = "TCP"
}
}
image_registry_credential {
server = "acr-name.azurecr.io"
username = ""
password = ""
}
}

Terraform remote-exec fails on a VM resource with a DHCP-assigned IP

I'm attempting to run a remote-exec provisioner on a vSphere virtual machine resource whose IP is assigned through DHCP rather than through a static IP setting on the network adapter (due to TF issues with Ubuntu 18.04).
When the "remote-exec" provisioner runs, it fails since it is unable to find the IP address. I've tried several things and am currently setting the "host" property of the connection block to "self.default_ip_address", in the hope that it will use the IP address automatically assigned to the VM through DHCP once it connects to my network. Unfortunately I'm still not having any luck getting this to work.
Below is an example of my resource declaration. Is there a better method for running remote-exec with DHCP that I'm just not aware of? I can't even seem to output the IP correctly after everything is built, even if I don't run the provisioner. Thanks for the help!
resource "vsphere_virtual_machine" "vm-nginx-2" {
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
name = "vm-nginx-2"
datastore_id = "${data.vsphere_datastore.datastore.id}"
folder = "${var.vsphere_vm_folder}"
enable_disk_uuid = true
wait_for_guest_net_timeout = 0
num_cpus = 2
memory = 2048
guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"
#scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"
network_interface {
network_id = "${data.vsphere_network.network.id}"
}
disk {
label = "vm-nginx-2-disk"
size = "${data.vsphere_virtual_machine.template.disks.0.size}"
}
clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"
customize {
timeout = 0
linux_options {
host_name = "vm-nginx-2"
domain = "adc-corp.com"
}
network_interface {}
ipv4_gateway = "192.168.0.1"
}
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update -y",
"sudo apt-get install -y nginx"
]
connection {
host = "${self.default_ip_address}"
type = "ssh"
user = "ubuntu"
private_key = "${file("files/adc-prod.pem")}"
}
}
}
# This also fails to print out an IP
output "vm-nginx-1-ip" {
  value = "${vsphere_virtual_machine.vm-nginx-1.default_ip_address}"
}

Deploy CoreOS virtual machine on vSphere with Terraform

I'm having a really difficult time trying to deploy a CoreOS virtual machine on vSphere using Terraform.
So far this is the terraform file I'm using:
# Configure the VMware vSphere Provider. ENV Variables set for Username and Passwd.
provider "vsphere" {
  vsphere_server       = "192.168.105.10"
  allow_unverified_ssl = true
}

provider "ignition" {
  version = "1.0.0"
}

data "vsphere_datacenter" "dc" {
  name = "Datacenter"
}

data "vsphere_datastore" "datastore" {
  name          = "vol_af01_idvms"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "Cluster_rnd/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "VM Network"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "coreos_production"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
# Create a folder
resource "vsphere_folder" "TestPath" {
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
  path          = "Test"
  type          = "vm"
}

# Define ignition data
data "ignition_networkd_unit" "vmnetwork" {
  name = "00-ens192.network"

  content = <<EOF
[Match]
Name=ens192
[Network]
DNS=8.8.8.8
Address=192.168.105.27/24
Gateway=192.168.105.1
EOF
}

data "ignition_config" "node" {
  networkd = [
    "${data.ignition_networkd_unit.vmnetwork.id}"
  ]
}
# Define the VM resource
resource "vsphere_virtual_machine" "vm" {
  name             = "terraform-test"
  folder           = "${vsphere_folder.TestPath.path}"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus         = 2
  memory           = 1024
  guest_id         = "other26xLinux64Guest"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    name             = "terraform-test.vmdk"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }
  extra_config {
    "guestinfo.coreos.config.data.encoding" = "base64"
    "guestinfo.coreos.config.data"          = "${base64encode(data.ignition_config.node.rendered)}"
  }
}
I'm using the Terraform vSphere provider to create the virtual machine and the Ignition provider to pass customization details to the virtual machine, such as network configuration.
It is not quite clear to me whether I'm using the extra_config property on the virtual machine definition correctly. You can find documentation about that property here.
The virtual machine gets created, but the network settings are never applied, meaning that Ignition provisioning is not working correctly.
I would appreciate any guidance on how to properly configure Terraform for this particular scenario (vSphere environment and a CoreOS virtual machine), especially regarding the guestinfo configuration.
Terraform v0.11.1, provider.ignition v1.0.0, provider.vsphere v1.1.0
VMware ESXi, 6.5.0, 5310538
CoreOS 1520.0.0
EDIT (2018-03-02)
As of version 1.3.0 of the Terraform vSphere provider, a new vApp property is available. Using this property, there is no need to tweak the virtual machine with VMware PowerCLI as I did in the original answer.
There is a complete example of using this property here.
The machine definition would now look something like this:
...
clone {
  template_uuid = "${data.vsphere_virtual_machine.template.id}"
}

vapp {
  properties {
    "guestinfo.coreos.config.data.encoding" = "base64"
    "guestinfo.coreos.config.data"          = "${base64encode(data.ignition_config.node.rendered)}"
  }
...
OLD ANSWER
Finally got this working.
The workflow I have used to create a CoreOS machine on vSphere using Terraform is as follows:
Download the latest Container Linux Stable OVA from
https://stable.release.core-os.net/amd64-usr/current/coreos_production_vmware_ova.ova.
Import coreos_production_vmware_ova.ova into vCenter.
Edit machine settings as desired (number of CPUs, disk size etc.)
Disable "vApp Options" of virtual machine.
Convert the virtual machine into a virtual machine template.
Once you have done this, you've got a CoreOS virtual machine template that is ready to be used with Terraform.
As I said in a comment on the question, some days ago I found this, and that led me to understand that my problem could be related to not being able to perform step 4.
The thing is that to be able to disable "vApp Options" (i.e. to see the "vApp Options" tab of the virtual machine in the UI) you need DRS enabled in your vSphere cluster, and to be able to enable DRS, your hosts must be licensed with a key that supports it. Mine weren't, so I was stuck on that 4th step.
I wrote to VMware support, and they told me an alternate way to do this, without having to buy a different license.
This can be done using VMware PowerCLI. Here are the steps to install PowerCLI, and here is the reference. Once you have PowerCLI installed, this is the script I used to disable "vApp Options" on my machines:
Import-Module VMware.PowerCLI
#connect to vcenter
Connect-VIServer -Server yourvCenter -User yourUser -Password yourPassword
#Use this to disable the vApp functionality.
$disablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$disablespec.vAppConfigRemoved = $True
#Use this to enable
$enablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$enablespec.vAppConfig = New-Object VMware.Vim.VmConfigSpec
#Get the VM you want to work against.
$VM = Get-VM yourTemplate | Get-View
#Disables vApp Options
$VM.ReconfigVM($disablespec)
#Enables vApp Options
$VM.ReconfigVM($enablespec)
I executed that in a PowerShell session and managed to reconfigure the virtual machine, completing that 4th step. With this, I finally got my CoreOS virtual machine template correctly configured for this scenario.
I've tested this with Terraform vSphere provider versions v0.4.2 and v1.1.0 (the syntax changes between them) and the machine gets created correctly; Ignition provisioning works and everything you put in your Ignition file (network configs, users, etc.) is applied to the newly created machine.
