Context:
Trying to create VMs in Azure using VMSS (Virtual Machine Scale Sets) in ARM mode with JSON templates.
Problem:
Creating a VMSS from an OS image and a data disk image using the Azure CLI and a JSON template creates the new VMSS but not the data disk.
My success so far:
Successfully created VMs with both an OS disk and a data disk from a custom image using the CLI and a JSON template. Also successfully created a VMSS (Virtual Machine Scale Set) from a valid custom OS image using the CLI with a JSON template.
My research for problem:
There isn't any sample on GitHub for this scenario (GitHub templates). The Microsoft Azure site also only has samples for the OS disk, not for creating a VMSS with a data disk.
The blkid command doesn't show the data disk at all, meaning it was never created and mounted. My JSON template's virtualMachineProfile -> storageProfile declares a valid dataDisks object, and I know it works because it successfully creates a VM (not a VMSS) with a data disk; the CLI also doesn't return any error.
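For reference, this is roughly the dataDisks shape (per the 2015-06-15 API) that works for the standalone-VM case described above; the storage account, container, and blob names in the URIs are placeholders:

```json
"storageProfile": {
  "osDisk": {
    "name": "osdisk",
    "caching": "ReadWrite",
    "createOption": "FromImage",
    "image": { "uri": "https://mystore.blob.core.windows.net/images/os.vhd" }
  },
  "dataDisks": [
    {
      "lun": 0,
      "createOption": "FromImage",
      "image": { "uri": "https://mystore.blob.core.windows.net/images/data.vhd" }
    }
  ]
}
```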
I know JSON-based templates are new and Microsoft is working on adding more features/functionality, so my question is: "Is there anything wrong with what I am doing, or is creating a VMSS with a data disk not yet implemented in Azure?"
Environment: Linux (Debian/RHEL)
Azure CLI : 0.9.13 (ARM mode)
Azure Api: 2015-06-15
Image: (CentOS 6.7)
Thanks for your help.
As per this blog post on VMSS and data disks, it is not yet supported. Such a bummer... Hopefully Microsoft will release this feature soon, before selling VMSS too heavily.
Related
My Azure VMSS is deployed successfully and operating as expected. It uses an Azure Compute Gallery reference image (which includes both an OS disk, and 1x Data Disk).
We are now changing all our Compute Gallery reference images to single-disk (OS-only) images, since the Terraform Windows VM resource does not support deploying from an image version that includes a data disk, even though the Terraform Windows VMSS resource does. But when changing the VMSS from the dual-disk image to the single-disk image, Terraform fails with:
Code="PropertyChangeNotAllowed" Message="Changing property 'dataDisks' is not allowed." Target="dataDisks"
NOTE: I have successfully tested the change the other way around, i.e. updating a VMSS that runs from a single-disk image to point at an image that uses two disks. This completed fine. It only fails when trying to go from two disks down to one.
(Using: Terraform v1.0.0; AzureRm Provider v3.0.2)
SOLUTION/WORKAROUND:
So, doing it via Terraform fails; however, with a little help from the portal, we can achieve the desired result. In the portal, on the VMSS, under Settings, select Disks and detach the data disk from the VMSS (takes a few seconds). Once it is detached, run Terraform again to update the image reference to the single-disk image. This time it completes successfully.
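If you prefer the CLI over the portal for the detach step, the modern Azure CLI has a `vmss disk detach` command (resource names below are placeholders; run Terraform afterwards as described above):

```sh
# Detach the data disk at LUN 0 from the scale set model
az vmss disk detach \
  --resource-group my-rg \
  --vmss-name my-vmss \
  --lun 0

# If the upgrade policy is Manual, roll the change out to existing instances
az vmss update-instances \
  --resource-group my-rg \
  --name my-vmss \
  --instance-ids '*'
```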
Hope this helps someone out there
While creating a Virtual Machine Scale Set in Azure, there is an option for passing Custom data under Operating System, like below.
How can I pass the script there using Terraform? There is a custom_data option, which seems to apply to machines newly created by Terraform, but the script is not getting stored there. How do I fill this with the script I have using Terraform? Any help on this would be appreciated.
From the official document, the custom_data can only be passed to the Azure VM at provisioning time.
Custom data is only made available to the VM during first boot/initial setup; we call this 'provisioning'. Provisioning is the process where VM create parameters (for example, hostname, username, password, certificates, custom data, keys, etc.) are made available to the VM and a provisioning agent processes them, such as the Linux Agent and cloud-init.
Where the custom data is saved differs by OS.
Windows
Custom data is placed in %SYSTEMDRIVE%\AzureData\CustomData.bin as a binary file, but it is not processed.
Linux
Custom data is passed to the VM via the ovf-env.xml file, which is copied to the /var/lib/waagent directory during provisioning. Newer versions of the Microsoft Azure Linux Agent will also copy the base64-encoded data to /var/lib/waagent/CustomData as well for convenience.
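As a quick local illustration of the base64 round-trip the agent performs (a temp file stands in for /var/lib/waagent/CustomData, which only exists on a real Azure VM):

```shell
#!/bin/sh
# A stand-in for the script you would pass as custom data
printf '#!/bin/sh\necho hello from custom data\n' > /tmp/test.sh

# The agent stores the custom data base64-encoded
base64 /tmp/test.sh > /tmp/CustomData

# On the VM you would decode it the same way
base64 -d /tmp/CustomData > /tmp/decoded.sh
diff /tmp/test.sh /tmp/decoded.sh && echo "round-trip OK"
```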
To upload custom_data from your local path to your Azure VM with Terraform, you can use the filebase64 function.
For example, there is a test.sh script or cloud-init.txt file under the path where your main.tf or terraform.exe file exists.
custom_data = filebase64("${path.module}/test.sh")
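Put together, a minimal sketch of the relevant scale set arguments (resource and file names are placeholders, and the other required arguments are omitted for brevity):

```hcl
resource "azurerm_linux_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  # ... sku, instances, admin credentials, source_image_reference, etc.

  # Base64-encode the local script and hand it to the VMs at provisioning time
  custom_data = filebase64("${path.module}/test.sh")
}
```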
If you are looking to execute scripts after the VMSS is created, you could look at the custom script extension and this sample.
I am using create_option as from_image to create a new VM and passing the old VHD URL in it, but it is not successfully provisioned.
What steps do I need to follow to make it work?
My VM is in Azure, and I want to create new VMs from its OS disk.
You will want to sysprep and generalize your VM and capture an image to use to deploy new VMs.
Follow the documentation located here: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource
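With the current Azure CLI, the capture flow in that document looks roughly like this (resource names are placeholders; for Windows, run sysprep /generalize /oobe /shutdown inside the VM first):

```sh
# Stop and deallocate the VM once it has been sysprepped/deprovisioned
az vm deallocate --resource-group my-rg --name my-vm

# Mark the VM as generalized (it can no longer be started afterwards)
az vm generalize --resource-group my-rg --name my-vm

# Capture a managed image from the generalized VM
az image create --resource-group my-rg --name my-image --source my-vm

# Deploy a new VM from the captured image
az vm create --resource-group my-rg --name new-vm \
  --image my-image --admin-username azureuser
```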
I just started using azure virtual machines and I must admit I still have a few questions regarding the disk management:
I manage my machines via the Node JS API in the following way:
azure vm create INSTANCE b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_10-amd64-server-20130227-en-us-30GB azureuser XXXXXX --ssh --location "West US" -t ./azure.pem
azure vm start INSTANCE
//do whatever
azure vm shutdown INSTANCE
azure vm delete INSTANCE
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am charged (i.e. deducted from my free trial). Are they not deleted by default?
Is there an API call to delete them? (I only found the corresponding REST calls, but I'm kind of unwilling to mix Node.js and REST API calls.)
Can I specify one of those existing disks when starting a new instance?
Thanks for your answers!
Jörg
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am charged (i.e. deducted from my free trial). Are they not deleted by default? Is there an API call to delete them? (I only found the corresponding REST calls, but I'm kind of unwilling to mix Node.js and REST API calls.)
Yes, the disks are not deleted by default. I believe the reason for that is to reuse those disks to spin off new VMs. To delete the disk (which is a page blob stored in Windows Azure Blob Storage) you could possibly use Azure SDK for Node: https://github.com/WindowsAzure/azure-sdk-for-node.
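Alternatively, the same classic CLI you are already using can list and delete the leftover disks (the disk name below is a placeholder, and I believe the blob-delete option also removes the underlying VHD blob; check `azure vm disk delete --help` on your CLI version):

```sh
# List disks left behind by deleted VMs
azure vm disk list

# Delete a disk and the underlying VHD blob
azure vm disk delete --blob-delete MY-DISK-NAME
```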
Can I specify one of those existing disks when starting a new instance?
Yes, you can. For that you would need to find the disk image and then use the following command:
azure vm create myVM myImage myusername --location "West US"
Where "myImage" is the name of the image. For more details, please visit: http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
Yes, when a VM is deleted the disk is left behind. Within the portal you can apply this disk image to a new VM instance on creation. There's some specific guidance on creating VMs from the API with existing disk images here:
http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
I'm trying to move a VM custom image from one DevTest Lab to another and can't seem to find an easy way to accomplish that. My VM is using managed disks and also has a data disk.
I've read the following article https://azure.microsoft.com/en-us/updates/azure-devtest-labs-changes-in-exporting-custom-image-vhd-files/ and it states that
Azure DevTest Labs now generates a managed image and "…This allows Sysprep'ed/deprovisioned custom images to support data disks in addition to the OS disk through a single image."
This is fine but the image that is created can't be exported.
Is it even possible to accomplish, am I missing something?
Thanks for your help
This is fine but the image that is created can't be exported.
The article you posted is right; you can follow it to export the VM OS disk (not the image) to your local machine. You should export the VM OS disk from the resource group which contains your DevTest Lab VM. The main steps are below:
Generate your VM
Go to the Azure Portal > find the resource group whose name contains your DevTest Lab VM:
Then, you can find the Disk and export it to your local machine:
Go to your other DevTest Lab > Configuration and policies > Custom images > Add > enter your VHD location and choose the OS type > OK. The custom image will be uploaded, and then you can use it to create your DevTest Lab VM.
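The portal's disk Export button can also be reproduced with the CLI, which helps if you want to script the move (names below are placeholders): grant temporary read access to the managed disk, then download the VHD from the returned SAS URL.

```sh
# Get a read-only SAS URL for the managed OS disk (valid for 1 hour)
sas=$(az disk grant-access --resource-group my-rg --name my-os-disk \
  --access-level Read --duration-in-seconds 3600 \
  --query accessSas -o tsv)

# Download the VHD locally
azcopy copy "$sas" ./my-os-disk.vhd

# Revoke access when done
az disk revoke-access --resource-group my-rg --name my-os-disk
```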