Nutanix VM migration to Azure - VM creation timed out

In the process of migrating a Nutanix virtual machine to Azure, I copied the ral-rdmbuild-02 copy.ova file onto a Windows machine and extracted it to get the .vmdk and .mf files. From the extracted files a .vhd file was created, which was later resized on an Ubuntu machine to meet the 1 MB alignment requirement.
Subsequently, the .vhd was copied to Azure, and an attempt to create a VM failed with the following error. Could someone help me overcome this issue?
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"OSProvisioningTimedOut","message":"OS Provisioning for VM '' did not finish in the allotted time. The VM may still finish provisioning successfully. Please check provisioning state later. Also, make sure the image has been properly prepared (generalized).\r\n * Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/ \r\n * Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/ \r\n * If you are deploying more than 20 Virtual Machines concurrently, consider moving your custom image to shared image gallery. Please refer to https://aka.ms/movetosig for the same."}]}

• Please check whether the Nutanix VM's virtual hard disk is a dynamically expanding disk; Azure only supports fixed-size VHDs for upload, so dynamically expanding disks can't be migrated as-is. The image of the Nutanix VM may also not have been prepared (generalized) correctly, so I would suggest recreating the image of the Nutanix VM and trying the migration to Azure again.
• To convert a dynamically expanding virtual disk to a fixed-size one, refer to the documentation link below, which covers the appropriate command:
Convert-VHD -Path c:\test\child1vhdx.vhdx -DestinationPath c:\test\child1vhd.vhd -VHDType Fixed
https://learn.microsoft.com/en-us/powershell/module/hyper-v/convert-vhd?view=windowsserver2019-ps
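Combining that conversion with the 1 MB size alignment mentioned in the question, a minimal sketch using the Hyper-V PowerShell module (the paths below are assumptions, not from the original post):

# Convert the dynamically expanding disk to a fixed-size VHD (hypothetical paths).
Convert-VHD -Path 'C:\vhds\ral-rdmbuild-02.vhdx' -DestinationPath 'C:\vhds\ral-rdmbuild-02.vhd' -VHDType Fixed

# Azure requires the virtual size to be a whole number of MiB; round up if needed.
$vhd = Get-VHD -Path 'C:\vhds\ral-rdmbuild-02.vhd'
$alignedBytes = [math]::Ceiling($vhd.Size / 1MB) * 1MB
if ($vhd.Size -ne $alignedBytes) {
    Resize-VHD -Path 'C:\vhds\ral-rdmbuild-02.vhd' -SizeBytes $alignedBytes
}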
• Also, based on the error message you are encountering, the OS deployment may have failed on the portal side because it was unable to pass some of the required parameters, which is why you got the 'OSProvisioningTimedOut' message and the VM did not finish deploying correctly. I would recommend trying a stop (deallocate) followed by a start of the VM to see if that resolves the issue.
If that doesn't help, I would recommend deleting the VM and its related resources (if created), taking a snapshot of the OS disk, creating a disk from the snapshot, and then creating the machine from that disk. Please refer to the link below on preparing a VHD and creating a VM from it in the Azure portal:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/create-upload-centos
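A minimal sketch of that snapshot-to-disk-to-VM flow with the Az PowerShell module; every resource name, VM size, and location below is an assumption:

$rg  = 'migration-rg'
$loc = 'eastus'

# Snapshot the existing OS disk.
$disk    = Get-AzDisk -ResourceGroupName $rg -DiskName 'ral-rdmbuild-02-osdisk'
$snapCfg = New-AzSnapshotConfig -SourceUri $disk.Id -Location $loc -CreateOption Copy
$snap    = New-AzSnapshot -ResourceGroupName $rg -SnapshotName 'osdisk-snap' -Snapshot $snapCfg

# Create a managed disk from the snapshot.
$diskCfg = New-AzDiskConfig -Location $loc -SourceResourceId $snap.Id -CreateOption Copy
$newDisk = New-AzDisk -ResourceGroupName $rg -DiskName 'osdisk-from-snap' -Disk $diskCfg

# Build a VM around the restored disk (attach it; don't re-provision).
$vm = New-AzVMConfig -VMName 'ral-rdmbuild-02' -VMSize 'Standard_D2s_v3'
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $newDisk.Id -CreateOption Attach -Linux
# ...attach a NIC with Add-AzVMNetworkInterface, then create the VM:
# New-AzVM -ResourceGroupName $rg -Location $loc -VM $vm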
• Finally, to generalize the VM, execute 'sysprep' on it so that a correct reference image is captured and can be migrated to another environment. Please refer to the link below, which explains the correct steps for generalizing the VM:
https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_1:wc-windows-vm-customize-with-sysprep-clone-vm-wc-t.html
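For reference, the standard generalization commands are below; the Windows one is what the linked guides run, and for a Linux guest the waagent step applies instead (both are run inside the guest):

# Windows guest:
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
# Linux guest:
sudo waagent -deprovision+user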

Related

Unable to modify Azure VMSS reference image to an image without data disks

My Azure VMSS is deployed successfully and operating as expected. It uses an Azure Compute Gallery reference image that includes both an OS disk and one data disk.
We are now changing all our Compute Gallery reference images to single, OS-disk-only images (the Terraform Windows VM resource does not support deploying from an image version that includes a data disk, even though the Terraform Windows VMSS resource does). But when changing the VMSS from the dual-disk image to the single-disk image, Terraform fails with:
Code="PropertyChangeNotAllowed" Message="Changing property 'dataDisks' is not allowed." Target="dataDisks"
NOTE: I have successfully tested the change the other way around, i.e. updating a VMSS that runs from a single-disk image to point to an image that uses two disks. That completed fine. It only fails when trying to go from two disks down to one.
(Using: Terraform v1.0.0; AzureRm Provider v3.0.2)
SOLUTION/WORKAROUND:
Doing it via Terraform fails; however, with a little help from the portal, we can achieve the desired result. In the portal, on the VMSS, under Settings, select Disks and detach the data disk from the VMSS (it takes a few seconds). Once it is detached, run Terraform again to update the image reference to the single-disk image. This time it completes successfully.
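If you prefer to script the detach step rather than use the portal, a sketch with the Az PowerShell module (the resource names and LUN are assumptions):

$vmss = Get-AzVmss -ResourceGroupName 'my-rg' -VMScaleSetName 'my-vmss'
# Remove the data disk (LUN 0 here) from the scale set model, then push the update.
Remove-AzVmssDataDisk -VirtualMachineScaleSet $vmss -Lun 0
Update-AzVmss -ResourceGroupName 'my-rg' -VMScaleSetName 'my-vmss' -VirtualMachineScaleSet $vmss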
Hope this helps someone out there

Azure pipeline 'WinRMCustomScriptExtension' underlying connection was closed in non-public VM

In an Azure pipeline, when creating a VM through a deployment template, we have the option 'Configure with WinRM agent', as given below.
This acts as a custom extension behind the scenes, but the download of this custom extension can be blocked by an internal vnet in Azure. This is the error we are getting:
<datetime> Adding extension 'WinRMCustomScriptExtension' on virtual machine <vmname>
<datetime> Failed to add the extension to the vm: <vmname>. Error: "VM has reported a failure when processing extension 'WinRMCustomScriptExtension'. Error message: \"Failed to download all specified files. Exiting. Error Message: The underlying connection was closed: An unexpected error occurred on a send.\"\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot "
Since the files cannot be downloaded, I am thinking of a couple of solutions:
1. How can I know which PowerShell files Azure is using to set up WinRM?
2. The location to store the files would be a storage account (in the same vnet as the VM).
3. Perhaps not use WinRM at all, and use a custom script extension to handle everything (with all files served from the storage account). I hope an error from the extension stops the pipeline if one happens.
Is there a better solution to resolve this? To me it looks like a design gap on Azure's part, as it does not cover non-public VMs.
EDIT:
Found the answer to #1: https://aka.ms/vstsconfigurewinrm. This was shown in the raw logs of the pipeline when diagnostics were enabled.
Even if you know, how does that help you? The VM won't be able to download them anyway, and you can't really tell the extension to use local files.
If you enable service endpoints and allow your subnet to talk to the storage account, it should work.
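A sketch of that service-endpoint setup with the Az PowerShell module; the vnet, subnet, address prefix, and account names are assumptions:

# Enable a Microsoft.Storage service endpoint on the VM's subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'my-rg' -Name 'my-vnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'vm-subnet' `
    -AddressPrefix '10.0.0.0/24' -ServiceEndpoint 'Microsoft.Storage'
$vnet | Set-AzVirtualNetwork

# Allow that subnet through the storage account's firewall.
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'vm-subnet'
Add-AzStorageAccountNetworkRule -ResourceGroupName 'my-rg' -Name 'mystorageacct' `
    -VirtualNetworkResourceId $subnet.Id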
There is also a way to configure WinRM when you create the VM (Key Vault example).
You could use a script extension as you wanted to as well, but the script extension has to download its files to the VM too (example).
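And a sketch of the script-extension route pulling from a storage account (all names and the script file below are assumptions):

# Fetch the storage account key, then run a script stored in a blob container on the VM.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'my-rg' -Name 'mystorageacct')[0].Value
Set-AzVMCustomScriptExtension -ResourceGroupName 'my-rg' -VMName 'my-vm' -Location 'eastus' `
    -Name 'ConfigureWinRM' -StorageAccountName 'mystorageacct' -StorageAccountKey $key `
    -ContainerName 'scripts' -FileName 'setup-winrm.ps1' -Run 'setup-winrm.ps1'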

Azure Created Image Without Generalizing

So I made the mistake of trying to capture an image of my VM without first running sysprep /generalize on it. Now I have a VM I can't start and an image I can't create a VM from.
Is there any way I can restore my original VM so I can create a valid image from it?
I saw this blog post https://learn.microsoft.com/en-us/archive/blogs/shwetanayak/captured-the-virtual-machine-didnt-intend-to-generalize-it-now-what that seems to imply that I can, but its solution says to create a copy of the VHD using a snapshot. When I try to create the snapshot, nothing shows up in the Source Disk managed-disks drop-down.
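One possible cause of the empty drop-down is that the VM uses an unmanaged VHD rather than a managed disk, since the snapshot blade only lists managed disks. In that case, a sketch of importing the VHD blob straight into a managed disk with the Az module (every name and URI below is an assumption):

# Import the OS VHD blob into a managed disk that can then be snapshotted or attached.
$diskCfg = New-AzDiskConfig -Location 'eastus' -CreateOption Import `
    -SourceUri 'https://mystorageacct.blob.core.windows.net/vhds/myvm-osdisk.vhd' `
    -StorageAccountId '/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageacct'
New-AzDisk -ResourceGroupName 'my-rg' -DiskName 'restored-osdisk' -Disk $diskCfg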

AzureRM - Ubuntu VM created using uploaded VHD stuck waiting on boot prompt

I have an Ubuntu Server 17.10 VM in vCenter that I exported using Export-VApp, then used Microsoft Virtual Machine Converter to create a VHD. I created an Azure RM disk image from that and spun up a VM, all of which seemed to go fine.
My problem is that New-AzureRMVM gets stuck in the Creating state, and when I go to the boot diagnostics screenshot, the VM is stuck on a "Please unlock disk sda5_crypt:" prompt.
First, from what I gather, is there really no way to get console access to my VM so that I can enter the passphrase? The VM won't get past Creating, so Connect is greyed out and I can't SSH into it. Is my only option here to go back to the VM on vCenter, migrate the entire partition to a new partition without disk encryption, and then redeploy? Is there any sort of startup file Azure accepts that could input this for me?

Azure Resource Manager: move VM to availability group

I can't seem to figure out how to change the availability set of an existing Azure VM in the Resource Manager stack. There's no interface for it. Set-AzureAvailabilitySet does not exist in the Azure PowerShell tools when in Resource Manager mode; it does exist in service-management mode, but that doesn't help me.
AFAIK, this feature may be addressed by the end of this year. It's a big challenge for the MS team to allow such an operation: changing the availability set requires a review of the VM mobility architecture on Azure. For example, adding a VM to an availability set that already contains a VM means placing it in a different fault domain, and because VM mobility is constrained on Azure (no live migration), that's not an easy operation.
I have written a PowerShell script that lets you change the availability set of an ARM VM by recreating it. Give it a try and enjoy.
How to use it?
1- Download the script and save it to a local location.
2- Run it and provide the requested parameters, or invoke it directly:
./Set-ArmVmAvailabilitySet.ps1 -VmName 'The VM Name' -ResourceGroup 'Resource Group' -AvailabilitySetName 'AS Name' -SubscriptionName 'The Subscription name'
To remove a VM from an availability set:
./Set-ArmVmAvailabilitySet.ps1 -VmName 'The VM Name' -ResourceGroup 'Resource Group' -AvailabilitySetName 0 -SubscriptionName 'The Subscription name'
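In outline, the script's recreate approach looks like the sketch below, written against the current Az module (the original script targets the older AzureRM cmdlets; the names are assumptions and a managed-disk VM is assumed):

$vm = Get-AzVM -ResourceGroupName 'my-rg' -Name 'my-vm'
$as = Get-AzAvailabilitySet -ResourceGroupName 'my-rg' -Name 'my-as'

# Delete only the VM object; its disks and NICs remain in the resource group.
Remove-AzVM -ResourceGroupName 'my-rg' -Name 'my-vm' -Force

# Rebuild the VM config pointing at the availability set and the surviving resources.
$newVm = New-AzVMConfig -VMName $vm.Name -VMSize $vm.HardwareProfile.VmSize -AvailabilitySetId $as.Id
$newVm = Set-AzVMOSDisk -VM $newVm -ManagedDiskId $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -CreateOption Attach -Windows
foreach ($nicRef in $vm.NetworkProfile.NetworkInterfaces) {
    $newVm = Add-AzVMNetworkInterface -VM $newVm -Id $nicRef.Id
}
New-AzVM -ResourceGroupName 'my-rg' -Location $vm.Location -VM $newVm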
Download Link
Version 1.01:
https://gallery.technet.microsoft.com/Set-Azure-Resource-Manager-f7509ec4
Source
That feature isn't implemented yet in the ARM stack; that's why you're not seeing the cmdlet.
