I've been looking at the Terraform vSphere provider documentation regarding deploying Windows Servers using custom OS guest settings. All I can see on its site is the following:
customize {
  linux_options / windows_options {
    host_name = "terraform-test"
    domain    = "test.internal"
  }
}
The above creates a new customization spec. What I'm looking for is how to deploy using an OS Custom Guest Settings specification that is already configured in vCenter.
disk {
  label            = "${var.Variable_VM_Name}_disk0"
  size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
  eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
  thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
}

clone {
  template_uuid = "${data.vsphere_virtual_machine.template.id}"
}

customize {
  windows_options { TO-DO }
}
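No answer was recorded for this question in the thread. As a hedged sketch only: newer releases of the vSphere provider (roughly 2.2.0 onwards, if memory serves) expose existing vCenter customization specifications through a vsphere_guest_os_customization data source and a customization_spec block inside clone; the block names and the spec name "win2019-std" below are assumptions that should be verified against the provider documentation.

# Hedged sketch, not from the original thread. Assumes a recent vSphere provider
# and an existing customization specification named "win2019-std" (hypothetical
# name) already defined in vCenter.
data "vsphere_guest_os_customization" "windows" {
  name = "win2019-std"
}

resource "vsphere_virtual_machine" "vm" {
  # ... disks, network, CPU/memory as in the question ...

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    # Reference the pre-existing spec instead of an inline customize {} block.
    customization_spec {
      id = data.vsphere_guest_os_customization.windows.id
    }
  }
}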
I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this .tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
  name   = "db1"
  memory = "1024"
  vcpu   = 1

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = "${libvirt_volume.centos7-qcow2.id}"
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}
My questions are:
(source) Does the path of my qcow file have to be local on my computer?
I have a KVM machine that I connect to remotely by its IP. Where should I put this IP in this .tf file?
When I did it manually, I ran virt-manager. Do I need to reference it anywhere here?
Thanks a lot.
No. It can also be an https URL.
Do you mean the KVM host on which the VMs will be created? Then you need to configure remote KVM access on that host and put its IP in the uri of the provider block:
uri = "qemu+ssh://username@IP_OF_HOST/system"
You don't need virt-manager when you use Terraform. You should use Terraform resources for managing VMs.
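Tying that back to the commented-out block in the question, a minimal sketch of a remote-host setup might look like the following; the IP, user, and resource names are placeholders, and key-based SSH access to the host is assumed:

# Sketch only: a second provider pointing at a remote KVM host (hypothetical IP/user).
provider "libvirt" {
  alias = "remote"
  uri   = "qemu+ssh://root@192.168.100.10/system"
}

# Resources opt in to the remote host via the provider meta-argument.
resource "libvirt_volume" "centos7_remote" {
  provider = libvirt.remote
  name     = "centos7.qcow2"
  pool     = "default"
  source   = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format   = "qcow2"
}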
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13
I want to deploy a Windows VM with the Azure Cloud Adoption Framework (CAF) using Terraform. In the example configuration.tfvars, all the configuration is done, but I cannot find the correct Terraform code to deploy this tfvars configuration.
The Windows VM module is here.
So far, I have written the code below:
module "caf_virtual_machine" {
source = "aztfmod/caf/azurerm//modules/compute/virtual_machine"
version = "5.0.0"
# belows are the 7 required variables
base_tags = var.tags
client_config =
global_settings = var.global_settings
location = var.location
resource_group_name = var.resource_group_name
settings =
vnets = var.vnets
}
The vnets, global_settings and resource_group_name variables already exist in configuration.tfvars. I have added the tags and location variables to configuration.tfvars.
But what should I enter for the settings and client_config variables?
The virtual machine module is a private module. You should use it by calling the base CAF module.
The Readme of the terraform registry explains how to leverage the core CAF module - https://registry.terraform.io/modules/aztfmod/caf/azurerm/latest/submodules/virtual_machine
Source code of an example:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine/211-vm-bastion-winrm-agents/registry
There is a library of configuration file examples showing how to deploy virtual machines:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine
module "caf" {
source = "aztfmod/caf/azurerm"
version = "5.0.0"
global_settings = var.global_settings
tags = var.tags
resource_groups = var.resource_groups
storage_accounts = var.storage_accounts
keyvaults = var.keyvaults
managed_identities = var.managed_identities
role_mapping = var.role_mapping
diagnostics = {
# Get the diagnostics settings of services to create
diagnostic_log_analytics = var.diagnostic_log_analytics
diagnostic_storage_accounts = var.diagnostic_storage_accounts
}
compute = {
virtual_machines = var.virtual_machines
}
networking = {
vnets = var.vnets
network_security_group_definition = var.network_security_group_definition
public_ip_addresses = var.public_ip_addresses
}
security = {
dynamic_keyvault_secrets = var.dynamic_keyvault_secrets
}
}
Note - it is recommended to use the VS Code devcontainer provided in the source repository to execute the Terraform deployment. The devcontainer includes the tooling required to deploy Azure solutions.
I just want to know how to create multiple VMs using the Terraform libvirt provider.
I managed to create one VM; is there a way to do it in one .tf file?
# libvirt.tf

# add the provider
provider "libvirt" {
  uri = "qemu:///system"
}

# create pool
resource "libvirt_pool" "ubuntu" {
  name = "ubuntu-pool"
  type = "dir"
  path = "/libvirt_images/ubuntu_pool/"
}

# create image
resource "libvirt_volume" "image-qcow2" {
  name   = "ubuntu-amd64.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "${path.module}/downloads/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}
I used cloud-init (cloud config) to define the user and SSH connection.
I want to complete this code so I can create multiple VMs.
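No answer was captured for this question. As a hedged sketch only, the usual pattern is the count meta-argument: stamp out one copy-on-write volume per VM from the base image above and one domain per volume. The variable name vm_count and the resource names below are placeholders, and the commented cloudinit line assumes a libvirt_cloudinit_disk built from the cloud config mentioned above.

# Sketch only (not from the thread): create N VMs with count.
variable "vm_count" {
  default = 3
}

# One copy-on-write disk per VM, backed by the base image above.
resource "libvirt_volume" "vm_disk" {
  count          = var.vm_count
  name           = "ubuntu-${count.index}.qcow2"
  pool           = libvirt_pool.ubuntu.name
  base_volume_id = libvirt_volume.image-qcow2.id
  format         = "qcow2"
}

# One domain per disk.
resource "libvirt_domain" "vm" {
  count  = var.vm_count
  name   = "ubuntu-vm-${count.index}"
  memory = 1024
  vcpu   = 1

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = libvirt_volume.vm_disk[count.index].id
  }

  # cloudinit = libvirt_cloudinit_disk.commoninit.id  # hypothetical: attach a cloud-init disk if you define one
}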
I was reading this blog on setting up an affordable Kubernetes cluster for personal projects, and set up my cluster.
Trouble is, I tend to forget a lot of manual configuration over time, so I decided to store it in declarative code using Terraform.
I've managed to build the following configuration, and apply it:
provider "google" {
credentials = "${file("secret-account.json")}"
project = "worklark-218609"
zone = "us-central1-a"
}
# configuration
resource "google_container_cluster" "primary" {
name = "worklark-cluster"
initial_node_count = 3
node_config {
machine_type = "f1-micro"
disk_size_gb = 10 # Set the initial disk size
preemptible = true
}
addons_config {
kubernetes_dashboard {
disabled = false # Configure the Kubernetes dashboard
}
http_load_balancing {
disabled = false # Configure the Kubernetes dashboard
}
}
}
The problem is that the two clusters are configured slightly differently; here's what I need to add to the configuration:
Stackdriver Logging: is currently Enabled, must be Disabled.
Stackdriver Monitoring: is currently Enabled, must be Disabled.
Automatic node upgrades: is currently Disabled, must be Enabled.
Automatic node repair: is currently Disabled, must be Enabled.
I can't find these configuration options in the documentation for the google_container_cluster resource. What do I do to set them?
I found the options:
Stackdriver Logging: called logging_service under google_container_cluster
Stackdriver Monitoring: called monitoring_service under google_container_cluster
Automatic node upgrades: called management.auto_upgrade under google_container_node_pool
Automatic node repair: called management.auto_repair under google_container_node_pool
The google_container_node_pool options aren't applicable to the default pool created with the cluster, unfortunately, so the workaround I found was to delete the default pool and then add a fully configured node pool to the cluster.
Here's the final config:
/* This configuration sets up a Kubernetes cluster following
   https://www.doxsey.net/blog/kubernetes--the-surprisingly-affordable-platform-for-personal-projects

   Confession: there's a minor difference between the article and my config. The
   former created a cluster and configured the default node pool, but the options
   for doing this via the API are limited, so my configuration creates an empty
   default node pool for the cluster, and then creates and adds a fully configured
   one on top.
*/
provider "google" {
credentials = "${file("secret-account.json")}"
project = "worklark-218609"
zone = "us-central1-a"
}
# Node pool configuration
resource "google_container_node_pool" "primary_pool" {
name = "worklark-node-pool"
cluster = "${google_container_cluster.primary.name}"
node_count = 3
node_config {
machine_type = "f1-micro"
disk_size_gb = 10 # Set the initial disk size
preemptible = true
}
management {
auto_repair = true
auto_upgrade = true
}
}
# configuration
resource "google_container_cluster" "primary" {
name = "worklark-cluster"
logging_service = "none"
monitoring_service = "none"
addons_config {
kubernetes_dashboard {
disabled = false # Configure the Kubernetes dashboard
}
http_load_balancing {
disabled = false # Configure the Kubernetes dashboard
}
}
remove_default_node_pool = "true"
node_pool {
name = "default-pool"
}
}
resource "google_compute_firewall" "default" {
name = "http-https"
network = "${google_container_cluster.primary.network}"
description = "Enable HTTP and HTTPS access"
direction = "INGRESS"
allow {
protocol = "tcp"
ports = ["80", "443"]
}
}
I'm having a really difficult time trying to deploy a CoreOS virtual machine on vSphere using Terraform.
So far this is the terraform file I'm using:
# Configure the VMware vSphere Provider. ENV Variables set for Username and Passwd.
provider "vsphere" {
  vsphere_server       = "192.168.105.10"
  allow_unverified_ssl = true
}

provider "ignition" {
  version = "1.0.0"
}

data "vsphere_datacenter" "dc" {
  name = "Datacenter"
}

data "vsphere_datastore" "datastore" {
  name          = "vol_af01_idvms"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "Cluster_rnd/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "VM Network"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "coreos_production"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

# Create a folder
resource "vsphere_folder" "TestPath" {
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
  path          = "Test"
  type          = "vm"
}
# Define ignition data
data "ignition_networkd_unit" "vmnetwork" {
  name    = "00-ens192.network"
  content = <<EOF
[Match]
Name=ens192
[Network]
DNS=8.8.8.8
Address=192.168.105.27/24
Gateway=192.168.105.1
EOF
}

data "ignition_config" "node" {
  networkd = [
    "${data.ignition_networkd_unit.vmnetwork.id}",
  ]
}
# Define the VM resource
resource "vsphere_virtual_machine" "vm" {
  name             = "terraform-test"
  folder           = "${vsphere_folder.TestPath.path}"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  num_cpus         = 2
  memory           = 1024
  guest_id         = "other26xLinux64Guest"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    name             = "terraform-test.vmdk"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }

  extra_config {
    "guestinfo.coreos.config.data.encoding" = "base64"
    "guestinfo.coreos.config.data"          = "${base64encode(data.ignition_config.node.rendered)}"
  }
}
I'm using the Terraform vSphere provider to create the virtual machine and the ignition provider to pass customization details to the virtual machine, such as network configuration.
It is not quite clear to me whether I'm using the extra_config property correctly in the virtual machine definition. You can find documentation about that property here.
The virtual machine gets created, but the network settings are never applied, meaning that ignition provisioning is not working correctly.
I would appreciate any guidance on how to properly configure Terraform for this particular scenario (vSphere environment and CoreOS virtual machine), especially regarding the guestinfo configuration.
Terraform v0.11.1, provider.ignition v1.0.0, provider.vsphere v1.1.0
VMware ESXi, 6.5.0, 5310538
CoreOS 1520.0.0
EDIT (2018-03-02)
As of version 1.3.0 of the Terraform vSphere provider, a new vapp property is available. Using this property, there is no need to tweak the virtual machine with VMware PowerCLI as I did in the original answer below.
There is a complete example of using this property here.
The machine definition would now look something like this:
...
clone {
  template_uuid = "${data.vsphere_virtual_machine.template.id}"
}

vapp {
  properties {
    "guestinfo.coreos.config.data.encoding" = "base64"
    "guestinfo.coreos.config.data"          = "${base64encode(data.ignition_config.node.rendered)}"
  }
}
...
OLD ANSWER
Finally got this working.
The workflow I have used to create a CoreOS machine on vSphere using Terraform is as follows:
1. Download the latest Container Linux stable OVA from https://stable.release.core-os.net/amd64-usr/current/coreos_production_vmware_ova.ova.
2. Import coreos_production_vmware_ova.ova into vCenter.
3. Edit machine settings as desired (number of CPUs, disk size etc.).
4. Disable "vApp Options" on the virtual machine.
5. Convert the virtual machine into a virtual machine template.
Once you have done this, you've got a CoreOS virtual machine template that is ready to be used with Terraform.
As I said in a comment on the question, some days ago I found this, and that led me to understand that my problem could be related to not being able to perform step 4.
The thing is, to be able to disable "vApp Options" (i.e. to see the "vApp Options" tab of the virtual machine in the UI) you need DRS enabled in your vSphere cluster, and, to be able to enable DRS, your hosts must be licensed with a key that supports DRS. Mine weren't, so I was stuck on that 4th step.
I wrote to VMware support, and they told me an alternate way to do this, without having to buy a different license.
This can be done using VMware PowerCLI. Here are the steps to install PowerCLI, and here is the reference. Once you have PowerCLI installed, this is the script I used to disable "vApp Options" on my machines:
Import-Module VMware.PowerCLI
#connect to vcenter
Connect-VIServer -Server yourvCenter -User yourUser -Password yourPassword
#Use this to disable the vApp functionality.
$disablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$disablespec.vAppConfigRemoved = $True
#Use this to enable
$enablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$enablespec.vAppConfig = New-Object VMware.Vim.VmConfigSpec
#Get the VM you want to work against.
$VM = Get-VM yourTemplate | Get-View
#Disables vApp Options
$VM.ReconfigVM($disablespec)
#Enables vApp Options
$VM.ReconfigVM($enablespec)
I executed that in a PowerShell session and managed to reconfigure the virtual machine, performing that 4th step. With this, I finally got my CoreOS virtual machine template correctly configured for this scenario.
I've tested this with Terraform vSphere provider versions v0.4.2 and v1.1.0 (the syntax changes between them) and the machine gets created correctly; ignition provisioning works and everything you put in your ignition file (network configs, users etc.) is applied to the newly created machine.