Azure: How to deploy a custom file referenced by DSC

I'm building an Azure RM template that will install DSC on a target VM. The DSC configuration must use a .bacpac file. How can I upload that file to the target VM? How can I have the target VM download it from GitHub and place it in a specific folder?
The DSC configuration looks like this:
https://github.com/PowerShell/xDatabase/blob/dev/Examples/Sample_xDatabase.ps1

Something like this:
Import-DscResource -ModuleName PSDesiredStateConfiguration,xPSDesiredStateConfiguration,xDatabase

Node $nodeName
{
    LocalConfigurationManager
    {
        RebootNodeIfNeeded = $true
    }

    xRemoteFile BacPacPackage
    {
        Uri             = "https://github.com/url_to_your_bacpac"
        DestinationPath = "c:\where_to_put_it"
        MatchSource     = $false
    }

    xDatabase DeployBacPac
    {
        Ensure           = "Present"
        SqlServer        = $nodeName
        SqlServerVersion = $SqlServerVersion
        DatabaseName     = $DatabaseName
        Credentials      = $credential # credentials to access SQL
        BacPacPath       = "c:\path_from_previous command"
        DependsOn        = "[xRemoteFile]BacPacPackage"
    }
}
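
One practical note, not from the original answer: xRemoteFile downloads whatever the Uri returns, so for a file hosted on GitHub the Uri needs to point at the raw file content (raw.githubusercontent.com) rather than the repository's HTML page. A hedged sketch, with placeholder owner/repo/branch/path values:

xRemoteFile BacPacPackage
{
    # raw file URL (placeholders), not the github.com page URL
    Uri             = "https://raw.githubusercontent.com/<owner>/<repo>/<branch>/db/your.bacpac"
    DestinationPath = "C:\Deploy\your.bacpac"
    MatchSource     = $false
}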

Related

Custom Script Extension not executing on VMSS

I am creating a VMSS using Terraform to use for an Azure DevOps agent pool. I'm able to create the VMSS successfully, but when I try to run the script that enrolls it in the agent pool, I'm hitting a wall. Nothing seems to work. Here is my TF code:
data "local_file" "template" {
filename = "./agent_install_script.ps1"
}
data "template_file" "script" {
template = data.local_file.template.content
vars = {
agent_name = var.agent_name
pool_name = var.agent_pool_name
token = var.pat_token
user_name = var.vmss_admin_username
logon_password = random_password.vm_password.result
}
}
module "vmss_windows2022g2" {
source = "../modules/vmss_windows"
environment = var.environment
resource_group_name = var.resource_group
vmss_sku = "Standard_DS2_v2"
vmss_nic_subnet_id = module.vnet_mgt.subnet_windows_vmss_id
vmss_nsg_id = module.nsg.vmss_nsg_id
vmss_computer_name = "win2022g2"
vmss_admin_username = var.vmss_admin_username
vmss_admin_password = random_password.vm_password.result
windows_image_id = data.azurerm_image.windows_server2022_gen2.id
vmss_storage_uri = data.azurerm_storage_account.vm_storage.primary_blob_endpoint
overprovision = false
#this will be stored at %SYSTEMDRIVE%\AzureData\CustomData.bin
customData = data.template_file.script.rendered
tags = local.env_tags_map
}
resource "azurerm_virtual_machine_scale_set_extension" "ext" {
name = "InstallDevOpsAgent"
virtual_machine_scale_set_id = module.vmss_windows2022g2.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = jsonencode({
"commandToExecute" = "dir C:\\ > C:\\temp\\test.txt"
#"cd C:\\AzureData; mv .\\CustomData.bin .\\install_agent.ps1; powershell -ExecutionPolicy Unrestricted -File .\\install_agent.ps1; del .\\install_agent.ps1;"
})
#protected_settings = var.protected_settings
failure_suppression_enabled = false
auto_upgrade_minor_version = false
automatic_upgrade_enabled = false
provision_after_extensions = []
timeouts {
create = "1h"
}
}
As you can see, I'm copying the PowerShell script via custom_data, and that is working fine, with all the variables substituted properly. I have tried executing the simple command dir C:\\ > C:\\temp\\test.txt to see if anything works, but I am not getting any output.
TF version 1.12, azurerm provider version 3.32.0
Azure DevOps should install an extension on the scale set (and in turn on the VMs), which will automatically enroll the agent without the need for a script.
More details here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops#lifecycle-of-a-scale-set-agent
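
As an aside, if a script-based route is still needed: the publisher/type pair used in the question (Microsoft.Azure.Extensions / CustomScript) is the Linux custom script extension; on a Windows scale set the extension is published by Microsoft.Compute as CustomScriptExtension, with the command in commandToExecute. A minimal sketch under that assumption, reusing the question's resource name and smoke-test command:

resource "azurerm_virtual_machine_scale_set_extension" "ext" {
  name                         = "InstallDevOpsAgent"
  virtual_machine_scale_set_id = module.vmss_windows2022g2.id
  publisher                    = "Microsoft.Compute"       # Windows custom script extension
  type                         = "CustomScriptExtension"
  type_handler_version         = "1.10"

  settings = jsonencode({
    # simple smoke-test command from the question; swap in the install_agent.ps1 sequence once this works
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -Command \"New-Item -ItemType Directory -Force C:\\temp; dir C:\\ > C:\\temp\\test.txt\""
  })
}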

Azure DSC: upload a file from storage to a VM?

I'm using the below code in a DSC configuration, but it never compiles. I always get the message "Stop: The term 'xRemoteFile' is not recognized as the name of a cmdlet, function, script file, or operable program."
I need to copy a file from an Azure storage account to a VM via Azure DSC.
configuration A_FILE
{
    Node localhost
    {
        File SetupDir {
            Type            = 'Directory'
            DestinationPath = 'C:\files\'
            Ensure          = "Present"
        }

        xRemoteFile Afile {
            Uri             = "https://storageaccountname.file.core.windows.net/files/file1.txt"
            DestinationPath = "C:\files\"
            DependsOn       = "[File]SetupDir"
            MatchSource     = $false
        }
    }
}
OK, I worked it out:
configuration getfilefromazurestorage
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node localhost
    {
        File SetupDir {
            Type            = 'Directory'
            DestinationPath = 'C:\localfiles'
            Ensure          = "Present"
        }

        xRemoteFile remotefile {
            # for the Uri, generate a SAS token and copy the full SAS URL into the Uri line below
            Uri             = "https://storageaccountname.blob.core.windows.net/files/sastokendetails"
            DestinationPath = "C:\localfiles\AzureFile.txt"
            DependsOn       = "[File]SetupDir"
            MatchSource     = $false
        }
    }
}
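
Not part of the original answer, but for completeness: a minimal sketch of generating that SAS URL with the Az.Storage PowerShell module. The storage account name, key, container and blob names below are placeholders:

# build a storage context from the account name and key (placeholders)
$ctx = New-AzStorageContext -StorageAccountName "storageaccountname" -StorageAccountKey "<account-key>"

# read-only SAS valid for a few hours; -FullUri returns the complete URL to paste into the Uri above
New-AzStorageBlobSASToken -Context $ctx -Container "files" -Blob "file1.txt" `
    -Permission r -ExpiryTime (Get-Date).AddHours(4) -FullUri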

How to create multiple VMs using the Terraform libvirt provider?

I just want to know how to create multiple VMs using the Terraform libvirt provider.
I managed to create one VM; is there a way to do it in one .tf file?
# libvirt.tf

# add the provider
provider "libvirt" {
  uri = "qemu:///system"
}

# create pool
resource "libvirt_pool" "ubuntu" {
  name = "ubuntu-pool"
  type = "dir"
  path = "/libvirt_images/ubuntu_pool/"
}

# create image
resource "libvirt_volume" "image-qcow2" {
  name   = "ubuntu-amd64.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "${path.module}/downloads/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}
I used cloud-config to define the user and SSH connection.
I want to complete this code so I can create multiple VMs.
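
The thread has no answer, so here is a minimal sketch of one common approach: use count to stamp out a copy-on-write volume and a domain per VM from the base image above. The VM count, names, memory/vcpu sizes and the "default" network are assumptions to adjust; the cloud-config mentioned above can typically be attached to each domain through the provider's cloudinit argument.

# hypothetical number of VMs
variable "vm_count" {
  default = 3
}

# one copy-on-write disk per VM, backed by the downloaded image
resource "libvirt_volume" "vm_disk" {
  count          = var.vm_count
  name           = "ubuntu-${count.index}.qcow2"
  pool           = libvirt_pool.ubuntu.name
  base_volume_id = libvirt_volume.image-qcow2.id
  format         = "qcow2"
}

# one domain (VM) per disk
resource "libvirt_domain" "vm" {
  count  = var.vm_count
  name   = "ubuntu-vm-${count.index}"
  memory = 2048
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.vm_disk[count.index].id
  }

  network_interface {
    network_name = "default"
  }
}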

Trouble accessing a newly attached disk in DSC on an Azure VM

I have an issue with a DSC configuration I'm trying to use to install and run a MongoDB service on an Azure VM.
When the DSC runs on the initial deployment of the VM, the secondary disk 'F' is attached and formatted successfully; however, I receive an error when trying to create directories on the new disk:
Error message: \"DSC Configuration 'Main' completed with error(s).
Cannot find drive. A drive with the name 'F' does not exist.
The PowerShell DSC resource '[Script]SetUpDataDisk' with SourceInfo 'C:\\Packages\\Plugins\\Microsoft.Powershell.DSC\\2.73.0.0\\DSCWork\\MongoDSC.0\\MongoDSC.ps1::51::2::Script' threw one or more non-terminating errors while running the Set-TargetResource functionality.
Here is my DSC script:
Configuration Main
{
    Param ( [string] $nodeName )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xStorage

    Node $nodeName
    {
        xWaitforDisk Disk2
        {
            DiskId           = 2
            RetryIntervalSec = 60
            RetryCount       = 60
        }

        xDisk FVolume
        {
            DiskId      = 2
            DriveLetter = 'F'
            FSLabel     = 'MongoData'
            DependsOn   = "[xWaitforDisk]Disk2"
        }

        Script SetUpDataDisk {
            TestScript = {
                return Test-Path "f:\mongoData\"
            }
            SetScript = {
                # set up the directories for mongo
                $retries = 0
                Do {
                    $mountedDrive = Get-Volume | Where DriveLetter -eq 'F'
                    if ($mountedDrive -eq $null)
                    {
                        Start-Sleep -Seconds 60
                        $retries = $retries + 1
                    }
                } While (($mountedDrive -eq $null) -and ($retries -lt 60))

                $dirName    = "mongoData"
                $dbDirName  = "db"
                $logDirName = "logs"

                ##! ERROR THROWN FROM THESE LINES
                New-Item -Path "F:\$dirName" -ItemType Directory
                New-Item -Path "F:\$dirName\$dbDirName" -ItemType Directory
                New-Item -Path "F:\$dirName\$logDirName" -ItemType Directory
            }
            GetScript = { @{Result = "SetUpDataDisk"} }
            DependsOn = "[xDisk]FVolume"
        }
    }
}
The annoying thing is that if I run the deployment again, everything works with no errors. I have put a loop in to try to wait for the disk to be ready, but it still throws the error. I'm very new to DSC, so any pointers would be helpful.
It seems xDiskAccessPath can be used for that:
<#
    .EXAMPLE
    This configuration will wait for disk 2 to become available, and then make the disk available as
    two new formatted volumes mounted to folders c:\SQLData and c:\SQLLog, with c:\SQLLog using all
    available space after c:\SQLData has been created.
#>
Configuration Example
{
    Import-DSCResource -ModuleName xStorage

    Node localhost
    {
        xWaitforDisk Disk2
        {
            DiskId           = 2
            RetryIntervalSec = 60
            RetryCount       = 60
        }

        xDiskAccessPath DataVolume
        {
            DiskId     = 2
            AccessPath = 'c:\SQLData'
            Size       = 10GB
            FSLabel    = 'SQLData1'
            DependsOn  = '[xWaitForDisk]Disk2'
        }

        xDiskAccessPath LogVolume
        {
            DiskId     = 2
            AccessPath = 'c:\SQLLog'
            FSLabel    = 'SQLLog1'
            DependsOn  = '[xDiskAccessPath]DataVolume'
        }
    }
}
https://github.com/PowerShell/xStorage/blob/dev/Modules/xStorage/Examples/Resources/xDiskAccessPath/1-xDiskAccessPath_InitializeDataDiskWithAccessPath.ps1

Deploy CoreOS virtual machine on vSphere with Terraform

I'm having a really difficult time trying to deploy a CoreOS virtual machine on vSphere using Terraform.
So far this is the terraform file I'm using:
# Configure the VMware vSphere Provider. ENV Variables set for Username and Passwd.
provider "vsphere" {
vsphere_server = "192.168.105.10"
allow_unverified_ssl = true
}
provider "ignition" {
version = "1.0.0"
}
data "vsphere_datacenter" "dc" {
name = "Datacenter"
}
data "vsphere_datastore" "datastore" {
name = "vol_af01_idvms"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_resource_pool" "pool" {
name = "Cluster_rnd/Resources"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network" {
name = "VM Network"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_virtual_machine" "template" {
name = "coreos_production"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
# Create a folder
resource "vsphere_folder" "TestPath" {
datacenter_id = "${data.vsphere_datacenter.dc.id}"
path = "Test"
type = "vm"
}
#Define ignition data
data "ignition_networkd_unit" "vmnetwork" {
name = "00-ens192.network"
content = <<EOF
[Match]
Name=ens192
[Network]
DNS=8.8.8.8
Address=192.168.105.27/24
Gateway=192.168.105.1
EOF
}
data "ignition_config" "node" {
networkd = [
"${data.ignition_networkd_unit.vmnetwork.id}"
]
}
# Define the VM resource
resource "vsphere_virtual_machine" "vm" {
name = "terraform-test"
folder = "${vsphere_folder.TestPath.path}"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 2
memory = 1024
guest_id = "other26xLinux64Guest"
network_interface {
network_id = "${data.vsphere_network.network.id}"
}
disk {
name = "terraform-test.vmdk"
size = "${data.vsphere_virtual_machine.template.disks.0.size}"
eagerly_scrub = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
}
clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"
}
extra_config {
guestinfo.coreos.config.data.encoding = "base64"
guestinfo.coreos.config.data = "${base64encode(data.ignition_config.node.rendered)}"
}
}
I'm using the Terraform vSphere provider to create the virtual machine and the Ignition provider to pass customization details to it, such as network configuration.
It is not quite clear to me whether I'm using the extra_config property on the virtual machine definition correctly. You can find documentation about that property here.
The virtual machine gets created, but the network settings are never applied, meaning that Ignition provisioning is not working correctly.
I would appreciate any guidance on how to properly configure Terraform for this particular scenario (vSphere environment and CoreOS virtual machine), especially regarding the guestinfo configuration.
Terraform v0.11.1, provider.ignition v1.0.0, provider.vsphere v1.1.0
VMware ESXi, 6.5.0, 5310538
CoreOS 1520.0.0
EDIT (2018-03-02)
As of version 1.3.0 of the Terraform vSphere provider, a new vApp property is available. Using this property, there is no need to tweak the virtual machine with VMware PowerCLI as I did in the original answer.
There is a complete example of using this property here.
The machine definition now would look something like this:
...
clone {
  template_uuid = "${data.vsphere_virtual_machine.template.id}"
}

vapp {
  properties {
    "guestinfo.coreos.config.data.encoding" = "base64"
    "guestinfo.coreos.config.data"          = "${base64encode(data.ignition_config.node.rendered)}"
  }
...
OLD ANSWER
Finally got this working.
The workflow I have used to create a CoreOS machine on vSphere using Terraform is as follows:
1. Download the latest Container Linux Stable OVA from https://stable.release.core-os.net/amd64-usr/current/coreos_production_vmware_ova.ova.
2. Import coreos_production_vmware_ova.ova into vCenter.
3. Edit the machine settings as desired (number of CPUs, disk size, etc.).
4. Disable "vApp Options" on the virtual machine.
5. Convert the virtual machine into a virtual machine template.
Once you have done this, you've got a CoreOS virtual machine template that is ready to be used with Terraform.
As I said in a comment to the question, some days ago I found this, and that led me to understand that my problem could be related to not being able to perform step 4.
The thing is that to be able to disable "vApp Options" (i.e. to see the "vApp Options" tab of the virtual machine in the UI) you need DRS enabled in your vSphere cluster, and to be able to enable DRS, your hosts must be licensed with a key that supports DRS. Mine weren't, so I was stuck at that 4th step.
I wrote to VMware support, and they told me an alternate way to do this, without having to buy a different license.
This can be done using VMware PowerCLI. Here are the steps to install PowerCLI, and here is the reference. Once you have PowerCLI installed, this is the script I used to disable "vApp Options" on my machines:
Import-Module VMware.PowerCLI
#connect to vcenter
Connect-VIServer -Server yourvCenter -User yourUser -Password yourPassword
#Use this to disable the vApp functionality.
$disablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$disablespec.vAppConfigRemoved = $True
#Use this to enable
$enablespec = New-Object VMware.Vim.VirtualMachineConfigSpec
$enablespec.vAppConfig = New-Object VMware.Vim.VmConfigSpec
#Get the VM you want to work against.
$VM = Get-VM yourTemplate | Get-View
#Disables vApp Options
$VM.ReconfigVM($disablespec)
#Enables vApp Options
$VM.ReconfigVM($enablespec)
I executed that in a PowerShell session and managed to reconfigure the virtual machine, completing that 4th step. With this, I finally got my CoreOS virtual machine template correctly configured for this scenario.
I've tested this with Terraform vSphere provider versions v0.4.2 and v1.1.0 (the syntax changes between them), and the machine gets created correctly; Ignition provisioning works, and everything you put in your Ignition file (network configs, users, etc.) is applied to the newly created machine.
