How to attach my uploaded VHD to a virtual machine in Azure?

I have successfully uploaded my 1 TB VHD (not containing Windows files) to Azure storage.
Now I want to attach it as a second drive to my virtual machine, but in the attach list I can only find the "attach an empty disk" option!
I used Add-AzureVhd to upload the VHD file:
Creating new page blob of size 999653638656...
I linked the storage resource in the cloud service, but the VHD is still not available to mount.
The storage container where I uploaded my VHD is the same one where the C: drive of my VM is saved.
The container access is set to private.
Will it help if I change it to Public Blob or Public Container?
What else can I try?
Thanks

Take a look at the PowerShell command Add-AzureDataDisk. This should be what you're looking for, as you can specify the media location of the uploaded VHD.
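For example, a minimal sketch using the classic (ASM) cmdlets; the service name, VM name, and blob URL are placeholders:

# attach the uploaded VHD as a data disk on LUN 0 (all names are placeholders)
Get-AzureVM -ServiceName "myService" -Name "myVM" |
    Add-AzureDataDisk -ImportFrom -MediaLocation "https://mystorage.blob.core.windows.net/vhds/mydata.vhd" -DiskLabel "data" -LUN 0 |
    Update-AzureVM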
Alternatively, in the portal, go to Virtual Machines and open the Disks tab, where you can create a new disk and point it at your uploaded VHD.
After this is done, the new disk becomes available to attach to a virtual machine.

It should show both attach options (Empty Disk and Existing Disk), as shown in this link from the Azure documentation.
If that is not possible for whatever reason, the alternative is:
Since you say you can already see Attach Empty Disk, you can attach an empty 1 TB disk, then download the blob and copy its contents onto that disk, as sketched below.
You won't be charged for outbound bandwidth, as the transfer is all internal.
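A minimal sketch of that copy, run inside the VM after the empty disk is attached and formatted; the blob URL, SAS token, and drive letter are placeholders:

azcopy copy "https://mystorage.blob.core.windows.net/vhds/data.vhd?<SAS>" "F:\data.vhd"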

Make sure you used CSUpload and not just pushed the VHD to blob storage. See: http://msdn.microsoft.com/en-us/library/windowsazure/gg466228.aspx

Related

How to copy an Azure disk from one location to another just by using Terraform

The challenge is to find a way to copy the OS disk and data disk of a VM located in one location to another location, and of course spawn a new virtual machine there.
So far I found the disks-upload-vhd-to-managed-disk-cli article and was able to copy a disk between different locations by using the azcopy utility and creating SAS URI links.
As I use Terraform everywhere, I don't like to use external tools for such a job.
I already tried to abuse azurerm_managed_disk to make a copy of my disk in another location, but it seems that is not possible; the disks need to be in the same place.
So maybe some of you have an idea how to make such a copy of the disks (or the entire VM) to a different location the Terraform way, and of course I don't mean using local-exec to wrap azcopy in it :)
Best Regards.
To copy the managed disk to another region, apart from the AzCopy command, you can only copy the disk, via a generated SAS URL, into a storage page blob in another region, and then create a managed disk from the VHD file in that storage blob.
The steps (a PowerShell sketch follows):
export the disk that you want to copy, which generates a SAS URL;
copy it into a storage page blob in a storage container in the target region, using the SAS URL;
create a managed disk from the page blob in the same region.
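A hedged sketch of those three steps with the Az PowerShell module; every resource group, disk, account, and region name here is a placeholder:

# 1. Export the source disk: grant read access, which returns a SAS URL
$sas = Grant-AzDiskAccess -ResourceGroupName "rg-src" -DiskName "myDisk" -Access Read -DurationInSecond 7200

# 2. Copy the disk into a page blob in a storage account in the target region
$ctx = (Get-AzStorageAccount -ResourceGroupName "rg-dst" -Name "dststorage").Context
Start-AzStorageBlobCopy -AbsoluteUri $sas.AccessSAS -DestContainer "vhds" -DestBlob "myDisk.vhd" -DestContext $ctx
Get-AzStorageBlobCopyState -Container "vhds" -Blob "myDisk.vhd" -Context $ctx -WaitForComplete

# 3. Create a managed disk in the target region from that page blob
$cfg = New-AzDiskConfig -Location "westeurope" -CreateOption Import `
    -SourceUri "https://dststorage.blob.core.windows.net/vhds/myDisk.vhd" `
    -StorageAccountId (Get-AzStorageAccount -ResourceGroupName "rg-dst" -Name "dststorage").Id
New-AzDisk -ResourceGroupName "rg-dst" -DiskName "myDisk-copy" -Disk $cfg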

How to upload a file from Azure blob storage to a Linux VM created on Azure

I have one large file in my Azure blob storage container. I want to move the file from blob storage to a Linux VM created on Azure. How can I do that using Data Factory, or any PowerShell command?
The easiest way, without any tools, is to generate a SAS token for the blob and run curl.
Generate the SAS, and then run curl:
curl <blob_sas_url> -o output.txt
If you need this automated, you can generate the SAS URL from a script every time, or just use AzCopy.
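A minimal sketch of generating such a SAS URL with the Az PowerShell module, assuming you have the storage account key; the account name, key variable, container, and blob name are placeholders:

$ctx = New-AzStorageContext -StorageAccountName "mystorage" -StorageAccountKey $key
$sasUrl = New-AzStorageBlobSASToken -Context $ctx -Container "data" -Blob "bigfile.dat" -Permission r -ExpiryTime (Get-Date).AddHours(4) -FullUri
# then, on the Linux VM:
curl "$sasUrl" -o bigfile.dat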
Please reference this blog: How to copy data to VM from blob storage. It gives you a way to solve the problem with Data Factory:
"To anyone who might get into same problem in future, I solved my problem by using 'copy wizard' present in ADF.
We need to install Data Management Gateway on VM and register it before we use 'copy wizard'.
We need to specify blob storage as source and in destination we need to choose 'File Server Share' option. In 'File Server Share' option we need to specify user credentials which I suppose pipeline uses to login to VM, folder on VM where pipeline will copy the data."
The Azure Blob storage documentation describes another way that can help you: Mount Blob storage as a file system with blobfuse on Linux.
Blobfuse is a virtual file system driver for Azure Blob storage. Blobfuse allows you to access your existing block blob data in your storage account through the Linux file system. Blobfuse uses the virtual directory scheme with the forward-slash '/' as a delimiter.
This guide shows you how to use blobfuse, and mount a Blob storage container on Linux and access data. To learn more about blobfuse, read the details in the blobfuse repository.
If you want to use AzCopy, you can reference this document: Transfer data with AzCopy and Blob storage. You can download AzCopy for Linux; it provides commands for uploading and downloading files.
For example, to upload a file:
azcopy copy "<local-file-path>" "https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>"
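Since the question is about getting the file onto the VM, the download direction is the same command with source and destination swapped (same placeholders as above):

azcopy copy "https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>" "<local-file-path>"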
For PowerShell on a Linux VM, you need PowerShell Core 6.x or later; Windows PowerShell 5.1 works only on Windows, while PowerShell 6 works on both Windows and Linux.
You can find the PowerShell commands in this document: Quickstart: Upload, download, and list blobs by using Azure PowerShell.
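For instance, a minimal sketch of downloading the blob onto the VM with the Az module; the account name, key variable, container, blob, and destination path are all placeholders:

$ctx = New-AzStorageContext -StorageAccountName "mystorage" -StorageAccountKey $key
Get-AzStorageBlobContent -Context $ctx -Container "data" -Blob "bigfile.dat" -Destination "/home/azureuser/bigfile.dat"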
Here is another link that talks about Copy Files to Azure VM using PowerShell Remoting 6 (Windows and Linux).
Hope this helps.
You have many options to copy content from the blob store to the disk on the VM:
1. Use AzCopy
2. Use Azure Pipelines - File copy task
3. Use Powershell cmdlets
A lot of content is available on these approaches on SO!
It seems this is not properly documented anywhere, so I am sharing the most basic approach, which is to use the azcopy tool that is available for both Windows and Linux. This approach doesn't need the complexity of creating credentials or tokens.
Download azcopy
It's a simple executable that can be run directly after extraction.
Create a managed identity (system-assigned identity) for your virtual machine. Navigate to VM -> Identity -> turn the Status to 'On' -> Save.
Now the VM can be assigned permission at the following levels:
Storage account
Container (file system)
Resource group
Subscription
For this case, navigate to storage account -> IAM -> Add role assignment -> Select role 'Storage Blob Data Contributor' -> Assign access to 'Virtual machine' -> Select the desired VM -> SAVE
NOTE: If you give access to the VM on IAM properties of a Resource Group, the VM will be able to access all the storage accounts of the RG.
Log in to the VM and assume the identity (run the command from the same location where azcopy is located):
For windows : azcopy login --identity
For linux : ./azcopy login --identity
Upload or download the files now:
azcopy cp "source-file" "storageUri/blob-container/" --recursive=true
Example: azcopy cp "C:\test.txt" "https://mystorageaccount.blob.core.windows.net/backup/" --recursive=true
IAM permissions can take a few minutes to propagate. If you change or add permissions or access levels anywhere, run the azcopy login --identity command again to pick up the updated identity.
More info on AzCopy is available here.

Invalid VHD path error when trying to create Azure Image

I'm trying to create a new image using the Azure Portal.
When I upload an image as any blob type to a container in a storage account and try to enter a path to it, I get the following error: "Invalid VHD blob path. Please make sure the path to the VHD is valid."
The path looks like this: "https://storageAccountName.blob.core.windows.net/containerName/filename"
What am I doing wrong and how to fix it?
Please make sure you have a VHD path ending in .vhd, like this: https://storageAccountName.blob.core.windows.net/containerName/myUploadedVHD.vhd
Also, it's recommended to upload .vhd files as page blobs.
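For example, Add-AzVhd uploads a local .vhd file as a page blob under a .vhd name, which covers both requirements; a sketch assuming the Az module, with resource group, destination URL, and local path as placeholders:

Add-AzVhd -ResourceGroupName "myRG" -Destination "https://storageAccountName.blob.core.windows.net/containerName/myUploadedVHD.vhd" -LocalFilePath "C:\vhds\myUploadedVHD.vhd"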
For more references:
Prepare a Windows VHD or VHDX to upload to Azure
Create a Windows VM from a specialized disk by using PowerShell
Creating An Azure VM From The VHDX/VHD File

Cannot delete blob: There is currently a lease on the blob and no lease ID was specified in the request

When I attempt to delete a blob from my storage account container, I get an error message, "There is currently a lease on the blob and no lease ID was specified in the request."
I have 4 virtual machine instances. I also have 8 virtual machine disks, 4 of which are in use (one by each of the virtual machine instances). Strangely, I have 10 blobs listed in my single storage account's lone container, called vhds. Here is a screenshot of the 10 blobs, highlighting the two that I cannot delete.
Can anyone give me guidance on how to delete these blobs? I have no use for them and I'd like to cut down on my storage costs for my subscription.
You need to delete the disks from the Virtual Machines section of the portal.
Navigate to Virtual Machines -> Disks
Delete the disks
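If you prefer PowerShell, the classic service-management module has an equivalent; a sketch, where the disk name is a placeholder and the -DeleteVHD switch also removes the underlying blob:

Remove-AzureDisk -DiskName "my-data-disk" -DeleteVHD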
Check this MSDN blog post for the complete instructions:
http://blogs.msdn.com/b/windows_azure_technical_support_wats_team/archive/2013/02/05/iaas-unable-to-delete-vhd-there-is-currently-a-lease-on-the-blob.aspx
Alternatively, you can just kill the lease on the blobs with PowerShell:
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName "RESOURCE_GROUP_NAME" -Name "STORAGE_ACCOUNT_NAME").Context
(Get-AzureStorageBlob -Context $ctx -Container "CONTAINER_NAME" -Blob "BLOB_NAME.vhd").ICloudBlob.BreakLease()
Just realize that when you do this, the VMs that use this storage will not be able to turn on. (You should turn them off, if they aren't already, before you do this.)
However, if you might use the VMs again in the future, this technique allows you to:
Stop the VM in question.
Download a copy of the VHD.
Release the lease on the VHD.
Delete the VHD in the storage account.
Insert an arbitrary time period where you don't need the VM.
Upload the VHD to the same storage account with the same container and same file name.
Start the VM back up and have it work :-).
There is an alternate (easier) way to break a lease if you use (or download) Microsoft Azure Storage Explorer (a really cool tool to manage Azure Storage).
You can browse to the Storage Account and find the relevant file (vhd) and then select the Break Lease option.
The same CAUTIONS above apply and the Explorer tool makes these clear.
You may have images associated with your VMs. Even if you have deleted the VMs, the images have to be deleted explicitly.
Once the images are deleted, you should see the VHDs getting cleared as well.

Uploading a 1 TB VHD to Azure

We have a very slow connection and a very small hard disk. How can I create a 1 TB VHD for a cloud drive on Azure?
Do you need to upload an existing VHD, or do you just need a 1 TB Azure drive for your application in the cloud? If it is the former, Rinat is probably right. Look at this blog post for how to write a console app: blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx.
However, if you just need a 1 TB Azure drive for your application, you can create one from code running in the cloud, similar to this:
// csa is the CloudStorageAccount and blobContainer the CloudBlobContainer,
// both initialized elsewhere in the role.
string pageBlobName = "testpageblob"; // or Guid.NewGuid().ToString()
string blobUri = string.Format("{0}/{1}", blobContainer.Uri.AbsoluteUri, pageBlobName);
CloudDrive cloudDrive = new CloudDrive(new Uri(blobUri), csa.Credentials);

// Create can fail transiently right after role startup, so retry for up to 5 minutes.
for (int i = 0; i < 30; i++)
{
    try
    {
        cloudDrive.Create(1048576); // size in MB; 1,048,576 MB = 1 TB
        break;
    }
    catch (CloudDriveException ex)
    {
        if (!ex.Message.Equals("ERROR_UNSUPPORTED_OS") || i == 29)
            throw;
        Thread.Sleep(10000);
    }
}

// Mount with a 25 MB local read cache; Mount returns the drive letter.
string driveLetter = cloudDrive.Mount(25, DriveMountOptions.Force);
For efficient uploads you can write a simple console app that will upload your VHD to Azure Blob Storage as page blobs, transferring only non-zero bytes.
CSUpload does exactly that: uploads nonzero pages only.
You can create a (small on disk) 1 TB dynamic VHD locally and then upload it with CSUpload, which expands it to fixed size on the fly without sending zero bytes. See also this question about having a 1 TB disk in Azure.
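If the Hyper-V PowerShell module is available, creating that dynamic VHD locally can be scripted; a one-line sketch, with the path as a placeholder:

New-VHD -Path "C:\vhds\data1tb.vhd" -SizeBytes 1TB -Dynamic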
You can also create a VM disk, attach a VM to it, and prepare it from there.
Afterwards, remove the disk from the VM (this will not remove it from storage) and mount it with CloudDrive.
(You can delete the VM afterwards.)
1 TB is an extremely large size for a VHD; however, a brand-new VHD contains very little data (for a fixed hard disk image, just a 512-byte footer at the end of the file). In other words, most of the data in your VHD is zeros, and there is no need to upload the entire 1 TB to Azure storage.
For the VHD file format, please refer to http://en.wikipedia.org/wiki/VHD_(file_format)
If you are familiar with the VHD format and the Azure storage client, you can write your own application to do these things (a sketch follows the links):
Create an empty 1 TB page blob on Azure storage with the Put Blob operation:
http://msdn.microsoft.com/en-us/library/windowsazure/dd179451.aspx
Write the VHD footer to the end of the page blob with the Put Page operation:
http://msdn.microsoft.com/en-us/library/windowsazure/ee691975.aspx
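A hedged PowerShell sketch of those two REST calls, authenticated with a SAS token; the account, container, blob name, and token are placeholders, and the zero-filled footer below stands in for a real VHD footer (which needs the 'conectix' cookie, disk size, and checksum fields filled in):

$sas  = "sv=...&sig=..."                # SAS token with create/write permission (placeholder)
$uri  = "https://mystorage.blob.core.windows.net/vhds/empty1tb.vhd?$sas"
$size = 1TB                             # 1,099,511,627,776 bytes

# Put Blob: create the empty 1 TB page blob (headers only, no body)
Invoke-WebRequest -Method Put -Uri $uri -Headers @{
    "x-ms-blob-type"           = "PageBlob"
    "x-ms-blob-content-length" = "$size"
}

# Put Page: write the 512-byte footer into the last page of the blob
$footer = [byte[]]::new(512)            # placeholder; a real VHD footer goes here
Invoke-WebRequest -Method Put -Uri "$uri&comp=page" -Headers @{
    "x-ms-page-write" = "update"
    "x-ms-range"      = "bytes=$($size - 512)-$($size - 1)"
} -Body $footer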
If not, you can use diskmgmt.msc (Disk Management) to create a VHD first, and then use Set-AzureStorageBlobContent in Windows Azure PowerShell to upload it efficiently.
Set-AzureStorageBlobContent skips the empty ranges when uploading a page blob:
Set-AzureStorageBlobContent -Container upload -File .\a.vhd -BlobType Page
For reference,
http://msdn.microsoft.com/en-us/library/dn408487.aspx
