Passing customdata to Operating system option of azure vmss - Terraform - azure

When creating a Virtual Machine Scale Set in Azure, there is an option for passing Custom data under Operating System, like below.
How can I pass my script there using Terraform? There is a custom_data option, which seems to be used for machines newly created from Terraform, but the script is not getting stored there. How do I fill this in with the script I have using Terraform? Any help on this would be appreciated.

From the official documentation, custom_data can only be passed to the Azure VM at provisioning time.
Custom data is only made available to the VM during first boot/initial
setup, we call this 'provisioning'. Provisioning is the process where
VM Create parameters (for example, hostname, username, password,
certificates, custom data, keys etc.) are made available to the VM and
a provisioning agent processes them, such as the Linux Agent and
cloud-init.
Where the custom data is saved differs by OS.
Windows
Custom data is placed in %SYSTEMDRIVE%\AzureData\CustomData.bin as a binary file, but it is not processed.
Linux
Custom data is passed to the VM via the ovf-env.xml file, which is copied to the /var/lib/waagent directory during provisioning. Newer versions of the Microsoft Azure Linux Agent will also copy the base64-encoded data to /var/lib/waagent/CustomData as well for convenience.
To upload custom_data from a local path to your Azure VM with Terraform, you can use the filebase64 function.
For example, suppose there is a test.sh script or cloud-init.txt file in the same path as your main.tf or terraform.exe file:
custom_data = filebase64("${path.module}/test.sh")
If you are looking to execute scripts after the VMSS is created, you could look at the custom script extension and this sample.
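Putting the pieces together, here is a minimal sketch of how that filebase64 call fits into a scale set resource with the azurerm provider (resource names, the subnet reference, and the image are illustrative assumptions, not from the question):

```hcl
# Sketch: passing a local script as custom_data on a Linux scale set.
# All names here are illustrative; adjust to your own resources.
resource "azurerm_linux_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Standard_F2"
  instances           = 2
  admin_username      = "adminuser"

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  # custom_data must be base64-encoded; filebase64 reads the file
  # and encodes it in one step.
  custom_data = filebase64("${path.module}/test.sh")

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }

  network_interface {
    name    = "example"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.example.id
    }
  }
}
```

Because the script only runs at provisioning time, changing the file later will not re-run it on existing instances; it only affects newly created ones.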


How To Capture information about Virtual Machine resources that will be destroyed?

Background
I was kind of dropped into an IaC project that uses Packer => Terraform => Ansible to create RHEL virtual machines on an on-prem VMware vSphere cluster.
Our VMware module registers output variables that we use once the VMs are created; those variables feed a local_file resource template to build an Ansible inventory with the VM names and some other variables.
Ansible is then run using local-exec with the above-created inventory to do configuration actions and run scripts, both on the newly deployed VMs and against some external management applications, for example to join the VM to a domain (FreeIPA; sadly no good TF provider is available).
Issue Description
The issue that I have been wrestling with is when we run a terraform destroy (or apply with some VM count changes that destroy a VM resource), we would like to be able to repeat the process in reverse.
Capture the names of the VMs to be destroyed (output vars from resource creation) so they can be removed from the IPA domain and have some general cleanup performed.
We've tried different approaches with Destroy Time Provisioners and it just seems like it would require a fundamental change in the approach outlined above to make that work.
Question
I'm wondering if there is a way to get an output variable on destroy that could be used to populate a list of the VMs that would be removed.
So far my search has turned up nothing. Thanks for your time.
In general, it is good to plan first, even when destroying:
terraform plan -destroy -out tfplan
Then, you can proceed with the destroy:
terraform apply tfplan
But at this moment (before the actual destroy), you have a plan of what will be destroyed, and you can do any analysis or automation on it. Example:
terraform show -json tfplan | jq > tfplan.json
Source:
https://learn.hashicorp.com/tutorials/terraform/plan
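Building on that, the plan JSON can be filtered with jq to get exactly the list of resources scheduled for deletion. A minimal sketch, using a hand-written sample plan file to stand in for real `terraform show -json` output (the resource addresses are illustrative):

```shell
# Illustrative stand-in for the output of: terraform show -json tfplan
cat > tfplan.json <<'EOF'
{"resource_changes":[
  {"address":"vsphere_virtual_machine.vm[0]","change":{"actions":["delete"]}},
  {"address":"local_file.inventory","change":{"actions":["no-op"]}}
]}
EOF

# List the addresses of resources the plan will delete.
jq -r '.resource_changes[]
       | select(.change.actions | index("delete"))
       | .address' tfplan.json
```

The same filter works on a real plan JSON; the resulting list of addresses (or, with a deeper filter, the `values` attributes such as VM names) can then feed the FreeIPA cleanup step before `terraform apply tfplan` runs.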

Azure ARM template executing two custom script extension on same VM

How do I deploy two custom script extensions on the same VM using an ARM template? For example, with one script I am installing a Domain Controller on the VM; upon reboot I want to execute one more script that creates domain user accounts. It's throwing an error right now if I have two extensions depending on the same VM:
"Multiple VMExtensions per handler not supported for OS type 'Windows'. VMExtension 'PrimaryDCVMPostDeploy' with handler 'Microsoft.Compute.CustomScriptExtension' already added or specified in input"
Thanks for any help!
Sri

Is it possible to update the assigned Azure DSC configuration to a VM via ARM Template?

I need to change the Azure DSC configuration that has been previously assigned to a VM.
I'm trying to do this programmatically because it's part of an automation I'm developing, and because of this, I'm using ARM templates.
However, redeploying the same VM DSC extension by ARM Template results in an error stating a VM can't have two of the same extensions, which sounds logical.
What I want to know is whether it's possible, by ARM template, to "update" or "modify" the current extension with just one setting changed: the configuration name.
Is this possible?
Sure - you can update the existing VM extension by providing new configuration in your ARM template. As you have found out, you cannot use a different name for the extension - that would result in two VM extensions of the same type on the VM. Instead, you need to reuse the same name of the existing VM extension when performing the update.
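As a rough sketch, the update deployment would look something like the fragment below: the extension keeps its existing name, and only the configuration function changes. The parameter names, API version, and handler version here are illustrative assumptions, not values from the question:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/Microsoft.Powershell.DSC')]",
  "apiVersion": "2019-07-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.77",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "configuration": {
        "url": "[parameters('dscPackageUrl')]",
        "script": "Configuration.ps1",
        "function": "[parameters('newConfigurationName')]"
      }
    }
  }
}
```

Redeploying with the same extension name replaces the extension's settings in place rather than trying to add a second extension of the same type.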

Securing credentials in Desired State Configuration deployed via ARM

How to use Desired State Configuration in combination with ARM.
Scope:
- We have an Azure virtual machine that is deployed via an ARM template.
- The VM has an extension resource in the ARM template, for the Desired State Configuration
- We need to pass sensitive parameters (in a secure way!) into the Desired State Configuration (we want to create an additional local windows account with the DSC)
- Configuration file is used to know what public key to use for encryption, and to let the VM know which certificate it has to use for decryption (by thumbprint)
- When using ARM, you need to define the configuration data file in a separate property
- I noticed that the DSC service automatically adds a certificate for document encryption to the VM.
Question:
If I want to get this working out of the box, I will need to create the configurationDataFile upfront and store it somewhere (like blob storage).
However, the 'out-of-the-box' certificate on the VM is only known after the ARM template has been deployed.
I was wondering if there is a way to get the encryption/decryption in DSC working, using the out of the box DSC Certificate on the VM, without using different incremental DSC templates.
So how can I know the out of the box certificate thumbprint at deployment time? (In the arm template?)
Do I actually need to transform the ConfigurationData file for every deployment (and finding the correct thumbprint of the VM), or is there an out of the box way to tell DSC via ARM to use the out of the box created certificate for this?
Because the target VM is also the authoring machine, the passwords can be passed as plain text, as they never leave the Virtual Machine.
This has been verified by Microsoft support.
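In practice, that means explicitly allowing plain-text passwords in the configuration data. A minimal sketch (node name and layout are illustrative):

```powershell
# Since the target VM is also the authoring machine, the password
# never leaves the VM, so plain text can be explicitly permitted.
$ConfigurationData = @{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            PSDscAllowPlainTextPassword = $true
        }
    )
}
```

Without this flag, DSC refuses to compile a configuration that embeds a credential unencrypted.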

azure vm location default using command line?

I'm trying to use the azure command line to start a vm:
azure vm start myvmnamehere
But it's telling me:
No deployments were found
I'm guessing that I need to specify the location "West US"?
azure vm start is going to start a virtual machine that you've already created, within a specific region. To do that, you'd first need to call azure vm create. You would first create your vm from an image in the gallery (and within a dns name, xxxxx.cloudapp.net). To see the images available to you, try running azure vm image list.
Also: don't forget to add --ssh or --rdp so you can have remote access, when calling azure vm create.
Jeff Wilcox blogged about this in more detail, here.
