Azure - copy files from VM to a storage account

I have prepared a script for executing a .ps1 script on a Windows VM in Azure. The PowerShell script is stored in a storage account, and its output is written to the local C:\ drive on the VM. To run the script on the VM I am using the Custom Script Extension (extract from my code below). My question is: how can I copy the resulting output.csv from the C:\ drive of the Windows VM to a storage account?
##Storage Account credentials
$StorageAccountName = (Get-AutomationPSCredential -Name 'xxx').UserName
$StorageAccountKey = (Get-AutomationPSCredential -Name 'xxx').GetNetworkCredential().Password
$Context = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey
$CSEName = ((Get-AzureRmVm -ResourceGroupName $ResourceGroup -Name $VM |
Where-Object {$_.Extensions.VirtualMachineExtensionType -eq "CustomScriptExtension"}).Extensions |
Where-Object {$_.VirtualMachineExtensionType -eq "CustomScriptExtension"}).Name
write-output "CSE Name: $CSEName"
Set-AzureRmVMCustomScriptExtension -ResourceGroupName $ResourceGroup `
-VMName $VM `
-FileName $psfilename `
-ContainerName $ContainerName `
-StorageAccountName $StorageAccountName `
-StorageAccountKey $StorageAccountKey `
-Run $psfilename `
-Location "North Europe" `
-Name $CSEName
$LogFilePath = "C:\Users\xxx\Desktop\Output.csv"

Note: I re-edited my entire answer.
If I understand your script correctly, you are using it for internal purposes.
In that case, the solution is very simple.
As you have all the information on the source and destination, all you need is to connect them with the proper blob copy.
You will need either a storage account access key (easy to get and use, but less secure)
or a SAS token with at least the following permissions: Read, Write, List (a bit harder to configure, but more secure).
Here is an example using a storage account access key:
$ResourceGroupName = "Resource-Group"
$storageAccountName = "storageaccount"
#full path to the file
$localPath = "C:\test\vm01.5f9e0fc2-cf60-444f-9cba-61c6f66b6373.serialconsole.log"
#blob container name
$storageContainerName = "test"
#Blob name (the name of the file after the transfer)
$BlobName = "destFileName.log"
#getting the SA Key
$SAkey = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -AccountName $storageAccountName).Value[0]
#getting Storage account context
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $SAkey
#copy the file
Set-AzStorageBlobContent -File $localPath -Container $storageContainerName -Blob $BlobName -Context $destinationContext
The script works if you're connected to Azure; if not, you can pass a SAS URL instead of the $SAkey and it will work the same.
To run it, just add it at the end of your .ps1 script.
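Here is a minimal sketch of the SAS variant, assuming the token is generated up front by someone who holds the key (the permission string "rwl" covers Read, Write, List):
#generate a container SAS (this part still needs the key-based context)
$sasToken = New-AzStorageContainerSASToken -Name $storageContainerName -Permission rwl -ExpiryTime (Get-Date).AddHours(1) -Context $destinationContext
#build a context from the SAS alone - no account key needed from here on
$sasContext = New-AzStorageContext -StorageAccountName $storageAccountName -SasToken $sasToken
Set-AzStorageBlobContent -File $localPath -Container $storageContainerName -Blob $BlobName -Context $sasContext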

Related

Powershell script to create a snapshot and store it into a storage account in another location

Using powershell script, I need to create a snapshot of a VM and save the snapshot in a storage account which is in a different region. The snapshot name should also contain the date on which it was taken, so that it can be auto deleted after 30 days. Do let me know how to achieve this.
Another major issue I am facing is how to store the snapshot in the storage account without using keys directly in the script.
This is the old script which I am using; it does not have the date in the snapshot name and uses storage account keys directly in the script, which is not secure.
#powershell script to create a snapshot
Select-AzSubscription -SubscriptionName 'subs name'
$subscriptionId = 'xxxxxx'
$resourceGroupName = "Rgname"
$vmName="VMname"
$Location = "East US"
#how to get the date into the name of the snapshot?
$snapshotName = "snapname"
$vmOSDisk=(Get-AzVM -ResourceGroupName $resourceGroupName -Name $vmName).StorageProfile.OsDisk.Name
$Disk = Get-AzDisk -ResourceGroupName $resourceGroupName -DiskName $vmOSDisk
$SnapshotConfig = New-AzSnapshotConfig -SourceUri $Disk.Id -CreateOption Copy -Location $Location
$Snapshot = New-AzSnapshot -Snapshot $SnapshotConfig -SnapshotName $snapshotName -ResourceGroupName $resourceGroupName
#powershell script to convert snapshot into managed disks
$diskName = 'ManagedDiskname'
#Provide the size of the disks in GB. It should be greater than the VHD file size.
$diskSize = '128'
$storageType = 'Premium_LRS'
Select-AzSubscription -SubscriptionId $SubscriptionId
$snapshot = Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Copy -SourceResourceId $snapshot.Id
New-AzDisk -Disk $diskConfig -ResourceGroupName $resourceGroupName -DiskName $diskName
#powershell script to save managed disk into a storage account which is in a different location
$sasExpiryDuration = "3600"
$storageAccountName = "storageacctname"
$storageContainerName = "containername"
#Provide the key of the storage account where you want to copy the VHD of the managed disk,
#e.g. (Get-AzStorageAccountKey -ResourceGroupName "Snapshot-Powershell" -AccountName "storageforsnap")[0].Value
$storageAccountKey = 'xxxxxx'
$destinationVHDFileName = "vhdfilename"
$useAzCopy = 1
Select-AzSubscription -SubscriptionId $SubscriptionId
$sas = Grant-AzDiskAccess -ResourceGroupName $ResourceGroupName -DiskName $diskName -DurationInSecond $sasExpiryDuration -Access Read
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
#Copy the VHD of the managed disk to the storage account
if ($useAzCopy -eq 1)
{
    $containerSASURI = New-AzStorageContainerSASToken -Context $destinationContext -ExpiryTime (Get-Date).AddSeconds($sasExpiryDuration) -FullUri -Name $storageContainerName -Permission rw
    azcopy copy $sas.AccessSAS $containerSASURI
} else {
    Start-AzStorageBlobCopy -AbsoluteUri $sas.AccessSAS -DestContainer $storageContainerName -DestContext $destinationContext -DestBlob $destinationVHDFileName
}
1. Azure snapshots can be auto-deleted after 30 days
As far as I know, Azure does not provide this feature, but we can implement it via a scheduled task.
For example
Enable the Run As account in the Azure Automation account
Install the Az.Automation, Az.Accounts and Az.Compute modules in the Automation account. Regarding how to install them, please refer to here
Create an Azure PowerShell runbook with the following script in the Automation account. For more details, please refer to here.
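Since the runbook runs unattended, it first needs to sign in with the Run As account; a typical preamble (assuming the default connection name "AzureRunAsConnection") is:
#authenticate with the Automation account's Run As connection
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzAccount -ServicePrincipal -Tenant $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint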
#get the snapshots created more than 30 days ago
$snps = Get-AzSnapshot | Where-Object {$_.TimeCreated -lt ([datetime]::UtcNow.AddDays(-30))}
foreach ($snp in $snps) {
    $snp | Remove-AzSnapshot -Force
}
Create a schedule for the Azure runbook.
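As for stamping the date into the snapshot name (the other part of the question), a minimal sketch for the creation script:
#e.g. snapname-2021-06-15, so the snapshot's age is visible in the name itself
$snapshotName = "snapname-$(Get-Date -Format 'yyyy-MM-dd')"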
2. How to securely connect Azure blob
If you want to connect to Azure Blob storage securely, we can implement it with Azure AD auth. For more details, please refer to here.
For example
Assign the Storage Blob Data Contributor role to the user or service principal
New-AzRoleAssignment -SignInName <email> `
-RoleDefinitionName "Storage Blob Data Contributor" `
-Scope "/subscriptions/<subscription>/resourceGroups/sample-resource-group/providers/Microsoft.Storage/storageAccounts/<storage-account>"
Script
Connect-AzAccount
$ResourceGroupName=""
$snapshotName=""
$sasExpiryDuration=3600
$sas =Grant-AzSnapshotAccess -SnapshotName $snapshotName -ResourceGroupName $ResourceGroupName -DurationInSecond $sasExpiryDuration -Access Read
$storageAccountName=""
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -UseConnectedAccount
$storageContainerName="image"
$destinationVHDFileName="test.vhd"
Start-AzStorageBlobCopy -AbsoluteUri $sas.AccessSAS -DestContainer $storageContainerName -DestContext $destinationContext -DestBlob $destinationVHDFileName
#check copy state
Get-AzStorageBlobCopyState -Container $storageContainerName -Blob $destinationVHDFileName -Context $destinationContext
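If you would rather block until the copy finishes, Get-AzStorageBlobCopyState also accepts -WaitForComplete:
#wait for the server-side copy to complete instead of polling
Get-AzStorageBlobCopyState -Container $storageContainerName -Blob $destinationVHDFileName -Context $destinationContext -WaitForComplete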

Download contents of azure blob container to a Linux machine using Get-AzStorageBlobContent

I have some data that resides in a blob container and it is stored in folders. I use Get-AzStorageBlobContent to download the contents of the blob as described in this answer.
It works fine on Windows, whereas on a Linux machine it does not keep the folder structure: it prepends the folder name to the individual file names. I was wondering whether Get-AzStorageBlobContent is not cross-platform or whether I am missing something. The version of Azure PowerShell that I am using on both Windows and Linux is 3.8. I had a look through the documentation for any cross-platform compatibility issues with Get-AzStorageBlobContent but it does not seem to mention anything.
This is what I use to download the contents of the blob in windows.
$SAName = "your storage account name"
$ConName = "your container name"
$key = "your storage account key"
$Ctx = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $Key
$List = Get-AzureStorageBlob -Container $ConName -Context $Ctx
$List = $List.name
foreach ( $l in $list ){
Get-AzureStorageBlobContent -Blob $l -Container $conname -Context $ctx
}
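A cross-platform workaround (a sketch, assuming blob names use / as the virtual folder separator, written with the current Az cmdlet names) is to recreate each blob's folder locally before downloading into it:
$Ctx = New-AzStorageContext -StorageAccountName $SAName -StorageAccountKey $Key
foreach ($blob in (Get-AzStorageBlob -Container $ConName -Context $Ctx)) {
    #recreate the virtual folder structure locally, then download into it
    $dest = Join-Path (Get-Location) $blob.Name
    New-Item -ItemType Directory -Path (Split-Path $dest -Parent) -Force | Out-Null
    Get-AzStorageBlobContent -Blob $blob.Name -Container $ConName -Context $Ctx -Destination $dest -Force
}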

Resetting Azure VM deployed from an image using JSON template

I am deploying Azure VMs (with unmanaged disks) that are based on a VHD image. The JSON templates used for deployment are stored in my Azure subscription.
Sometimes I need to reset the machine to the original state - the manual way to achieve this through the Azure web portal is:
Open the resource group and delete the VM (while keeping other resources).
Go to the storage account and delete the VHD that served as the OS disk for the machine.
Go back to the Resource group -> Deployments -> select the last deployment -> Redeploy.
I want to do this programmatically using PowerShell. All the steps are quite easily achievable except for the last one - running redeployment.
This is my PowerShell code:
# Authenticate to Azure Account
Login-AzAccount
$vm = Get-AzVM | Out-GridView -Title "Select machine to be reset to factory state" -PassThru
$groupName = $vm.ResourceGroupName
#Stop the VM
Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
#Delete VM
#Remove-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
#Getting storage context, blob name and deleting VHD (blob)
$disk = $vm.StorageProfile.OsDisk
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myStorageAccountResourceGroupName" -Name "myStorageAccountName"
#Get storage context
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageAccount.ResourceGroupName -Name $storageAccount.StorageAccountName)[0].Value
$context = New-AzStorageContext -StorageAccountName $storageAccount.StorageAccountName -StorageAccountKey $storageKey
$container = Get-AzStorageContainer -Context $context -Name 'vhds'
$blobName = $disk.Name + ".vhd"
$blob = Get-AzStorageBlob -Container $container.Name -Context $context -Blob $blobName
#Delete Blob
$blob | Remove-AzStorageBlob
Now for the last step - I can get the last resource group deployment and set up a new deployment with the -RollbackToLastDeployment parameter.
#Redeploy Group
$deployments = Get-AzResourceGroupDeployment $groupName
$deployment = $deployments[$deployments.Count - 1]
New-AzResourceGroupDeployment -Name $deployment.DeploymentName -ResourceGroupName $groupName -TemplateFile <Expects template in local storage> -RollbackToLastDeployment
The problem is that the New-AzResourceGroupDeployment command expects a JSON template that is on my local disk, but I have my templates stored in the Azure subscription.
Is there any way to use a template that is located in Azure subscription for redeployment of a resource group?
No matter where the template file is located, you can copy the template to a local .json file, upload it to storage, and then use the -TemplateUri parameter to deploy the remote template.
Sample:
Set-AzCurrentStorageAccount -ResourceGroupName ManageGroup -Name {your-unique-name}
# get the URI with the SAS token
$templateuri = New-AzStorageBlobSASToken -Container templates -Blob storage.json -Permission r `
-ExpiryTime (Get-Date).AddHours(2.0) -FullUri
# provide URI with SAS token during deployment
New-AzResourceGroup -Name ExampleGroup -Location "South Central US"
New-AzResourceGroupDeployment -ResourceGroupName ExampleGroup -TemplateUri $templateuri
For more details, you could refer to this link.
Update:
It seems we cannot find the URI of the Template (preview) in the portal. My workaround is to manually save the template as a local .json file, upload it to Azure Blob Storage, and then use the sample above.
Follow the steps:
1. In the portal, click View Template; you can copy the template and save it as a local .json file.
2. Then go to the container of your storage account and upload the .json file.
3. Click the ... of your .json file -> Generate SAS -> Generate blob SAS token and URL, and copy the Blob SAS URL; it is the $templateuri that you need in New-AzResourceGroupDeployment -ResourceGroupName ExampleGroup -TemplateUri $templateuri. Or you can generate it with New-AzStorageBlobSASToken, as in the sample above.

How do I clone an Azure Managed Disk into a different subscription?

Using Azure VMs and managed disks (using the ARM deployment model), I have recently run into the following problem I would like to solve: In order to get production data out from a managed disk for testing purposes, I would like to clone a production data disk from the "Production Subscription" into a managed disk in the "Development Subscription", where I can play around with the data in a safe way.
We are talking quite a lot of data (200 GB+), so an actual "copying" process would take far too much time. I want to be able to automate things and provision new environments in, let's say, under half an hour.
Cloning a managed disk within a subscription (given it's in the same region) is very simple and fast, I just have to specify a --source to the az disk create command. This does not work across subscriptions obviously, at least because the logged in user/service principal for the development subscription does not have access to the production subscription resources.
What I have tried so far:
Using az disk grant-access to retrieve a SAS URI for the managed disk; this is not accepted as a --source for az disk create, though (it says VHD SAS links would work...)
Any ideas?
I did this:
$RG = "youresourcegroup"
$Location = "West US 2"
$StorageAccName = "yourstorage"
$SkuName = "Standard_LRS"
$Containername = "images"
$Destdiskname = "yourblob.vhd"
$SourceSASurl = "https://yoursasurl"
Login-AzureRmAccount
New-AzureRmResourceGroup -Name $RG -Location $Location
New-AzureRmStorageAccount -ResourceGroupName $RG -Name $StorageAccName -SkuName $SkuName -kind Storage -Location $Location
$Storageacccountkey = Get-AzureRmStorageAccountKey -ResourceGroupName $RG -Name $StorageAccName
$Storagectx = New-AzureStorageContext -StorageAccountName $StorageAccName -StorageAccountKey $Storageacccountkey[0].Value
$Targetcontainer = New-AzureStorageContainer -Name $Containername -Context $storagectx -Permission Blob
#$SourceSASurl is the SAS URL obtained from Grant-AzureRmDiskAccess in the production subscription (see the sketch below)
$ops = Start-AzureStorageBlobCopy -AbsoluteUri $SourceSASurl -DestBlob $Destdiskname -DestContainer $Containername -DestContext $Storagectx
Get-AzureStorageBlobCopyState -Container $Containername -Blob $Destdiskname -Context $Storagectx -WaitForComplete
After this you will have a copy of the managed disk in your subscription, stored as a regular blob.
Be careful: you should obtain the SAS URL from the Production subscription, but in the script you should log in to the Development subscription, as sketched below.
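For completeness, obtaining that SAS URL on the production side looks roughly like this (a sketch; the subscription ID, resource group and disk name are placeholders):
#run this while logged in to the Production subscription
Select-AzureRmSubscription -SubscriptionId $prodSubscriptionId
$mdiskURL = Grant-AzureRmDiskAccess -ResourceGroupName $prodRG -DiskName $prodDiskName -Access Read -DurationInSecond 3600
$SourceSASurl = $mdiskURL.AccessSAS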
Next you can go to the Azure portal and convert the blob to a managed disk.
Go to the Azure portal --> More Services --> Disks, or browse directly to https://portal.azure.com/#create/Microsoft.ManagedDisk-ARM
Click +Add
Select Storage blob as the source
Select your VHD using the Source blob field.
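The portal conversion can also be scripted with the same AzureRM cmdlets; a rough sketch (the disk name is a placeholder, reusing the variables from the script above):
#import the copied VHD blob as a managed disk
$vhdUri = "https://$StorageAccName.blob.core.windows.net/$Containername/$Destdiskname"
$diskConfig = New-AzureRmDiskConfig -Location $Location -CreateOption Import -SourceUri $vhdUri
New-AzureRmDisk -ResourceGroupName $RG -DiskName "cloned-disk" -Disk $diskConfig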
Here's the script I wrote to migrate all managed disks for each VM from one subscription to another. I hope this helps you.
# This script will get ALL VMs in a subscription and then migrate the disks if the VM has managed disks
# Created by Joey Brakefield -- #kfprugger & https://www.linkedin.com/in/joeybrakefield/
#set global variables
$sourceSubscriptionId='6a1b5e5e-df06-4608-a7a2-6984f7abacd8'
select-azurermsubscription -subscriptionid $sourceSubscriptionId
$vms = get-azurermvm
$targetSubscriptionId='929e0340-bf36-45a2-8347-47f86b4715de'
#looping logic for each of the VMs that have managed disks
foreach ($vm in $vms) {
select-azurermsubscription -subscriptionid $sourceSubscriptionId
$vmrg = get-azurermresourcegroup -name $vm.ResourceGroupName
$vmname = $vm.name
Write-Host "Working with: $vmname in $($vmrg.ResourceGroupName)" -ForegroundColor Green
Write-Host ""
#This command will only target managed disks because unmanaged use the storage account locations rather than the /disks provider URIs
$manageddisk = Get-AzureRmDisk | ? {$_.OwnerId -like "/subscriptions/"+$sourceSubscriptionId+"/resourceGroups/"+$vmrg.ResourceGroupName+"/providers/Microsoft.Compute/virtualMachines/"+$vm.Name}
if ($manageddisk)
{
#Sanity Check
#Read-host "Look correct? If not, CTRL-C to Break"
Select-AzureRmSubscription -SubscriptionId $targetSubscriptionId
#check to see if RG exists in the new CSP/Subscription
Get-AzureRmResourceGroup -Name $vmrg.resourcegroupname -ev notPresent -ea 0
Write-Host "Checking to see if $($vmrg.ResourceGroupName) exists in subscription $targetSubscriptionId" -ForegroundColor Cyan
Write-Host ""
if ($notPresent)
{
new-azurermresourcegroup -name $vmrg.resourcegroupname -location $vmrg.location
"Resource Group " + $vmrg.resourcegroupname + " has been created"
} else {"Resource Group " + $vmrg.resourcegroupname + " already exists"}
# Move the disks after all checks are done
foreach ($disk in $manageddisk) {
    $managedDiskName = $disk.Name
    $targetResourceGroupName = $vmrg.resourcegroupname
    $diskConfig = New-AzureRmDiskConfig -SourceResourceId $disk.Id -Location $disk.Location -CreateOption Copy
    New-AzureRmDisk -Disk $diskConfig -DiskName $disk.Name -ResourceGroupName $targetResourceGroupName
}
}
}
You can use the following commands in Azure CLI -
# Source storage account name
STORAGE1=sourcestorage
#Security key of the source storage account
STORAGEKEY1="SampleKey0qNzttE/EX3hHfcFIzkQQmqXklRU2Z2uANICw=="
#Container containing the source VHD
CONTAINER1=sourcevhds
# Name of VHD to be copied (name only, not full url)
DISK=DiskToBeCopied.vhd
#Specify the above properties for target
STORAGE2=targetstorage
STORAGEKEY2="SampleKeyAb6FYP3EqFVEcN2cc5wOQHzXvdc7Gzh1qRt0FXKq6w=="
CONTAINER2=targetvhds
After setting the above parameters, execute the following command in Azure CLI -
azure storage blob copy start --account-name $STORAGE1 --account-key $STORAGEKEY1 --source-container $CONTAINER1 --source-blob $DISK --dest-account-name $STORAGE2 --dest-account-key $STORAGEKEY2 --dest-container $CONTAINER2
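The same server-side copy can be done with the PowerShell cmdlets used elsewhere in this thread; a sketch (fill in the same account names, keys and container names as above):
$srcCtx = New-AzureStorageContext -StorageAccountName $STORAGE1 -StorageAccountKey $STORAGEKEY1
$destCtx = New-AzureStorageContext -StorageAccountName $STORAGE2 -StorageAccountKey $STORAGEKEY2
Start-AzureStorageBlobCopy -SrcContainer $CONTAINER1 -SrcBlob $DISK -Context $srcCtx -DestContainer $CONTAINER2 -DestBlob $DISK -DestContext $destCtx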

Reducing size of Azure VM OS Disk for download/export

I would like to create an Azure VM with a smaller OS disk than the default 127 GB. I've been unable to find such an option in the Azure Portal, so I have attempted to shrink the disk. I have not been successful.
I understand I can trim (using the defragmentation tool) and shrink the volume (with Disk Management), but this won't change the "physical" size of the hard disk. That is, if I shrink the disk to 40 GB, there will just be 87 GB unallocated and the blob will still report 127 GB.
What I am attempting to achieve is to shrink the blob to match the allocated space, facilitating smaller downloads/exports of the VM image (e.g. 40 vs 127 GB).
Any and all help is appreciated.
I have written a blog post detailing this answer in full, but the main issue here was reducing the size of the Azure VM disk, which defaults to 127 GB, to allow for faster export/download. The way I achieved this was by trimming the hard drive and then using Disk2VHD to create a VHD file of the running VM. Disk2VHD creates an expandable disk that is only as large as the current data on the disk, not the entire available disk - in my case 40 GB vs 127 GB. If you save this VHD file to an attached disk (read: blob storage) it can be easily downloaded via HTTP by your entire team. Thus, the download is now 40 GB instead of 127 GB. For more, please read my detailed blog post:
http://www.kevinmcloutier.com/?p=263
Original link no longer works:
https://web.archive.org/web/20161027213258/http://kevinmcloutier.com/post/4
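For reference, Disk2vhd can also be run from an elevated command prompt; the basic invocation is the volumes to capture followed by the output path (the output path here is an assumption and should point at an attached data disk, not the OS volume being captured):
disk2vhd.exe C: D:\export\myvm.vhd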
In Azure, if you create a VM it will be created with some default configuration. At present it is not supported to reduce/shrink the size of an Azure VM's OS disk (managed or unmanaged) from the Azure portal (say from 128 GB to 32 GB, for example). Using the following process we can achieve that and cut down the disk cost.
Step 1. Open your VM and go to Disk Management.
Step 2. Open PowerShell inside the VM and shrink the OS partition.
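The exact command isn't included above; a sketch of what the in-VM shrink typically looks like (an assumption on my part, with C: as the OS volume and a 32 GB target disk, leaving some headroom below the target size):
# shrink the OS partition so it fits inside the smaller target disk
Resize-Partition -DriveLetter C -Size 30GB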
Step 3. Now deallocate the VM from the Azure portal.
Step 4. Now go to the Properties blade of the disk and copy the Resource ID.
Step 5. Now execute the following PowerShell script from your local system. You must change $DiskID, $VMName and $AzSubscription to your values.
# Variables
$DiskID = "" # e.g. "/subscriptions/203bdbf0-69bd-1a12-a894-a826cf0a34c8/resourcegroups/rg-server1-prod-1/providers/Microsoft.Compute/disks/Server1-Server1"
$VMName = "VM-Server1"
$DiskSizeGB = 32
$AzSubscription = "Prod Subscription"
# Script
# Provide your Azure admin credentials
Connect-AzAccount
#Provide the subscription Id of the subscription where snapshot is created
Select-AzSubscription -Subscription $AzSubscription
# VM to resize disk of
$VM = Get-AzVm | ? Name -eq $VMName
#Provide the name of your resource group where snapshot is created
$resourceGroupName = $VM.ResourceGroupName
# Get Disk from ID
$Disk = Get-AzDisk | ? Id -eq $DiskID
# Get VM/Disk generation from Disk
$HyperVGen = $Disk.HyperVGeneration
# Get Disk Name from Disk
$DiskName = $Disk.Name
# Get SAS URI for the Managed disk
$SAS = Grant-AzDiskAccess -ResourceGroupName $resourceGroupName -DiskName $DiskName -Access 'Read' -DurationInSecond 600000;
#Provide the managed disk name
#$managedDiskName = "yourManagedDiskName"
#Provide Shared Access Signature (SAS) expiry duration in seconds e.g. 3600.
#$sasExpiryDuration = "3600"
#Provide storage account name where you want to copy the snapshot - the script will create a new one temporarily
$storageAccountName = "shrink" + [system.guid]::NewGuid().tostring().replace('-','').substring(1,18)
#Name of the storage container where the downloaded snapshot will be stored
$storageContainerName = $storageAccountName
#Provide the key of the storage account where you want to copy snapshot.
#$storageAccountKey = "yourStorageAccountKey"
#Provide the name of the VHD file to which snapshot will be copied.
$destinationVHDFileName = "$($VM.StorageProfile.OsDisk.Name).vhd"
#Generate the SAS for the managed disk
#$sas = Grant-AzureRmDiskAccess -ResourceGroupName $resourceGroupName -DiskName $managedDiskName -Access Read -DurationInSecond $sasExpiryDuration
#Create the context for the storage account which will be used to copy snapshot to the storage account
$StorageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -SkuName Standard_LRS -Location $VM.Location
$destinationContext = $StorageAccount.Context
$container = New-AzStorageContainer -Name $storageContainerName -Permission Off -Context $destinationContext
#Copy the snapshot to the storage account and wait for it to complete
Start-AzStorageBlobCopy -AbsoluteUri $SAS.AccessSAS -DestContainer $storageContainerName -DestBlob $destinationVHDFileName -DestContext $destinationContext
while(($state = Get-AzStorageBlobCopyState -Context $destinationContext -Blob $destinationVHDFileName -Container $storageContainerName).Status -ne "Success") { $state; Start-Sleep -Seconds 20 }
$state
# Revoke SAS token
Revoke-AzDiskAccess -ResourceGroupName $resourceGroupName -DiskName $DiskName
# Empty disk to get footer from
$emptydiskforfootername = "$($VM.StorageProfile.OsDisk.Name)-empty.vhd"
# Empty disk URI
#$EmptyDiskURI = $container.CloudBlobContainer.Uri.AbsoluteUri + "/" + $emptydiskforfooter
$diskConfig = New-AzDiskConfig `
-Location $VM.Location `
-CreateOption Empty `
-DiskSizeGB $DiskSizeGB `
-HyperVGeneration $HyperVGen
$dataDisk = New-AzDisk `
-ResourceGroupName $resourceGroupName `
-DiskName $emptydiskforfootername `
-Disk $diskConfig
$VM = Add-AzVMDataDisk `
-VM $VM `
-Name $emptydiskforfootername `
-CreateOption Attach `
-ManagedDiskId $dataDisk.Id `
-Lun 63
Update-AzVM -ResourceGroupName $resourceGroupName -VM $VM
$VM | Stop-AzVM -Force
# Get SAS token for the empty disk
$SAS = Grant-AzDiskAccess -ResourceGroupName $resourceGroupName -DiskName $emptydiskforfootername -Access 'Read' -DurationInSecond 600000;
# Copy the empty disk to blob storage
Start-AzStorageBlobCopy -AbsoluteUri $SAS.AccessSAS -DestContainer $storageContainerName -DestBlob $emptydiskforfootername -DestContext $destinationContext
while(($state = Get-AzStorageBlobCopyState -Context $destinationContext -Blob $emptydiskforfootername -Container $storageContainerName).Status -ne "Success") { $state; Start-Sleep -Seconds 20 }
$state
# Revoke SAS token
Revoke-AzDiskAccess -ResourceGroupName $resourceGroupName -DiskName $emptydiskforfootername
# Remove temp empty disk
Remove-AzVMDataDisk -VM $VM -DataDiskNames $emptydiskforfootername
Update-AzVM -ResourceGroupName $resourceGroupName -VM $VM
# Delete temp disk
Remove-AzDisk -ResourceGroupName $resourceGroupName -DiskName $emptydiskforfootername -Force;
# Get the blobs
$emptyDiskblob = Get-AzStorageBlob -Context $destinationContext -Container $storageContainerName -Blob $emptydiskforfootername
$osdisk = Get-AzStorageBlob -Context $destinationContext -Container $storageContainerName -Blob $destinationVHDFileName
$footer = New-Object -TypeName byte[] -ArgumentList 512
write-output "Get footer of empty disk"
$downloaded = $emptyDiskblob.ICloudBlob.DownloadRangeToByteArray($footer, 0, $emptyDiskblob.Length - 512, 512)
$osDisk.ICloudBlob.Resize($emptyDiskblob.Length)
$footerStream = New-Object -TypeName System.IO.MemoryStream -ArgumentList (,$footer)
write-output "Write footer of empty disk to OSDisk"
$osDisk.ICloudBlob.WritePages($footerStream, $emptyDiskblob.Length - 512)
Write-Output -InputObject "Removing empty disk blobs"
$emptyDiskblob | Remove-AzStorageBlob -Force
#Provide the name of the Managed Disk
$NewDiskName = "$DiskName" + "-new"
#Create the new disk with the same SKU as the current one
$accountType = $Disk.Sku.Name
# Get the new disk URI
$vhdUri = $osdisk.ICloudBlob.Uri.AbsoluteUri
# Specify the disk options
$diskConfig = New-AzDiskConfig -AccountType $accountType -Location $VM.location -DiskSizeGB $DiskSizeGB -SourceUri $vhdUri -CreateOption Import -StorageAccountId $StorageAccount.Id -HyperVGeneration $HyperVGen
#Create Managed disk
$NewManagedDisk = New-AzDisk -DiskName $NewDiskName -Disk $diskConfig -ResourceGroupName $resourceGroupName
$VM | Stop-AzVM -Force
# Set the VM configuration to point to the new disk
Set-AzVMOSDisk -VM $VM -ManagedDiskId $NewManagedDisk.Id -Name $NewManagedDisk.Name
# Update the VM with the new OS disk
Update-AzVM -ResourceGroupName $resourceGroupName -VM $VM
$VM | Start-AzVM
start-sleep 180
# Please check the VM is running before proceeding with the below tidy-up steps
# Delete old Managed Disk
Remove-AzDisk -ResourceGroupName $resourceGroupName -DiskName $DiskName -Force;
# Delete old blob storage
$osdisk | Remove-AzStorageBlob -Force
# Delete temp storage account
$StorageAccount | Remove-AzStorageAccount -Force
You would have to create your own VM image and then deploy using that. This template shows you how to deploy using your own image.
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-from-user-image
Currently, the images in the gallery are all 127 GB. Since Azure VMs use only fixed-size disks, you can't just select the size.
Sapnandu's solution works. Many thanks!
I'm still not 100% sure what the magic criteria were to make it work, but I finally have a VM running with the reduced disk. Making it Gen1 definitely was one of them.
I tried many similar things and they all got stuck at boot time.
My way was:
Create the VM from the gallery images. You'll have a 128 GB OS disk.
Tweak the VM according to your needs. Here you can run the resize script, so you'll end up with ~90 GB of unallocated space.
After a few failed attempts I started to value the configured VM and I really didn't want to repeat the configuration again.
Make a snapshot of the disk, then make a disk from the snapshot. These options will be presented to you by the snapshot and by the disk.
Then make a new VM from the disk and check whether it boots. Boot diagnostics shows it rather quickly.
After stopping the new VM I looked up the three required parameters for Sapnandu's script (subscriptionID, diskID, vmName; Step 5 in his post) and executed the script in Azure Cloud Shell (the icon in the header).
It takes about 10 minutes or so.
When the script finished, the new VM was running with the smaller disk.
So there is a VM setup where the script works perfectly.
