I have copied two VHDs into blob storage as page blobs. Using the SDK API wrappers from C#, how can I let Azure know that one is an OS disk, and one is a data disk? I want to set this up so I can then use the regular v1 portal GUI to create a new VM using the disks I uploaded.
Thanks.
AFAIK, you don't have to do anything special. You can simply create disks out of these page blobs (as long as they are valid fixed-format VHDs) and start using them.
Once you do this, you should be able to create a VM from the OS disk and attach the data disks to it.
For more information on attaching a disk to VM, you may find this link useful: https://azure.microsoft.com/en-us/documentation/articles/storage-windows-attach-disk/.
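If you want to do the disk registration itself from C#, here is a rough sketch using the classic (ASM-era) Microsoft.WindowsAzure.Management.Compute package. Treat the exact namespaces and parameter names as assumptions, since they shifted between package versions, and substitute your own subscription ID, certificate path, and blob URIs:

using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.Azure; // CertificateCloudCredentials; namespace varies by package version
using Microsoft.WindowsAzure.Management.Compute;
using Microsoft.WindowsAzure.Management.Compute.Models;

class RegisterDisks
{
    static void Main()
    {
        // Placeholder credentials: a management certificate already uploaded to the subscription.
        var credentials = new CertificateCloudCredentials(
            "your-subscription-id",
            new X509Certificate2(@"C:\certs\management.pfx", "password"));

        using (var compute = new ComputeManagementClient(credentials))
        {
            // Register the OS disk: setting OperatingSystemType is what marks the VHD as bootable.
            compute.VirtualMachineDisks.CreateDisk(new VirtualMachineDiskCreateParameters
            {
                Name = "my-os-disk",
                Label = "my-os-disk",
                OperatingSystemType = "Windows", // or "Linux"
                MediaLinkUri = new Uri("https://myaccount.blob.core.windows.net/vhds/os.vhd")
            });

            // Register the data disk: simply omit OperatingSystemType.
            compute.VirtualMachineDisks.CreateDisk(new VirtualMachineDiskCreateParameters
            {
                Name = "my-data-disk",
                Label = "my-data-disk",
                MediaLinkUri = new Uri("https://myaccount.blob.core.windows.net/vhds/data.vhd")
            });
        }
    }
}

Once registered this way, both disks should show up under Virtual Machines > Disks in the v1 portal, and the OS disk should appear in the My Disks gallery when creating a new VM.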
I am trying to create a page blob using the storage API and add it as a disk to the virtual machine. Is there a way this can be done?
Currently, when I create a blob and add it as the disk, the VM fails with a provisioning state of Failed.
It sounds like you want to create a data disk using the Azure Storage SDK for Java, attaching a page blob as a data disk for a Linux VM. However, some of the concepts as you've understood them are not accurate.
Firstly, you need to create a VHD file in your local environment. As a reference, you can follow the documents below.
On Windows, please refer to the document Create and Use a Virtual Hard Disk on Windows 7 to create a VHD file.
On Linux or macOS, you can install and configure QEMU/VirtualBox/KVM to create a disk image and convert it. For example, a QEMU image can be converted with the qemu-img convert command; qemu-img convert -f qcow2 -O vpc -o subformat=fixed disk.qcow2 disk.vhd produces the fixed-format VHD that Azure requires.
For more information, please see About disks and VHDs for Azure virtual machines.
Secondly, upload the VHD file you created to Azure Blob Storage as a page blob via AzCopy, or follow the related section of the tutorial Creating and Uploading a Virtual Hard Disk that Contains the Linux Operating System.
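You mentioned the Java SDK; the flow is the same there, but here is a minimal sketch in C# with the classic WindowsAzure.Storage package (UploadFromFile's signature varies slightly across SDK versions). The account details and paths are placeholders, and the VHD must be fixed-format so that its length is a multiple of 512 bytes:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class UploadVhd
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var container = account.CreateCloudBlobClient().GetContainerReference("vhds");
        container.CreateIfNotExists();

        // Page blobs require the source length to be a multiple of 512 bytes;
        // a fixed-format VHD satisfies this, a dynamic one does not.
        var blob = container.GetPageBlobReference("mydatadisk.vhd");
        blob.UploadFromFile(@"C:\vhds\mydatadisk.vhd");
    }
}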
Then, you can refer to the document Add a disk to a Linux VM to attach the data disk on Azure Storage.
Meanwhile, based on my understanding, I think you may just want to extend the filesystem of your Linux VM. If so, another solution may suit your needs: mounting Azure File Storage on a Linux VM using the SMB protocol. For more details, please refer to How to use Azure File Storage with Linux.
Hope it helps. If you have any concerns, please feel free to let me know.
I am running a VM in Windows Azure. It has two disks attached to it (OS 40GB and DATA 60GB).
In addition to my two VHDs, the Storage has one more 40GB VHD named dmzvyyq2.jja20130312104458.vhd.
I would like to know where this VHD came from and what is using it. Surprisingly, the 'LAST MODIFIED' date is yesterday, so something must have updated it. I went through all the options in the Portal but nothing seems to have this VHD attached.
Ultimately I would like to delete this VHD to save storage space and cost.
One way to find this out is by using Storage Analytics. If you have storage analytics enabled, you can view the contents of the $logs blob container, download the logs for the date in question, and check all the activity on this particular blob. You can use a tool like Azure Management Studio from Cerebrata to view storage analytics data. However, if you haven't enabled analytics on your storage account, it will be very tough to find out that information.
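As a hedged sketch of where to look, the analytics logs live in a container named $logs and can be enumerated with the classic WindowsAzure.Storage SDK (account details are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ListAnalyticsLogs
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var logs = account.CreateCloudBlobClient().GetContainerReference("$logs");

        // Blob-service logs are grouped under the "blob/" prefix by date and hour.
        foreach (var item in logs.ListBlobs("blob/", useFlatBlobListing: true))
        {
            var logBlob = (CloudBlockBlob)item;
            Console.WriteLine(logBlob.Uri);
            // Download each log (logBlob.DownloadText()) and search it for
            // "dmzvyyq2.jja20130312104458.vhd" to see which operations touched the blob.
        }
    }
}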
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of hosting via Azure at no cost, where I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e. what if someone were to run an UPDATE statement and forget the WHERE clause).
I know the VMs are stored as .VHD files in my local Azure storage, but the VM image is 127 gigs. Downloading that nightly even with FIOS internet isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a vhd. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls though: there's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots, and return to a previous snapshot later. A snapshot is read-only (which you can copy from later) and takes up no space initially. However, as storage pages are changed, the snapshot grows.
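If you'd rather stay in C# than shell out to the CLI, here is a rough sketch of both operations with the classic WindowsAzure.Storage SDK (in older versions StartCopy is named StartCopyFromBlob; account details and blob names are placeholders):

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BackupVhd
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var container = account.CreateCloudBlobClient().GetContainerReference("vhds");
        var source = container.GetPageBlobReference("myvm-os.vhd");

        // Server-side asynchronous copy: the bytes never leave the data center.
        var backup = container.GetPageBlobReference("myvm-os-backup.vhd");
        backup.StartCopy(source);
        backup.FetchAttributes();
        while (backup.CopyState.Status == CopyStatus.Pending)
        {
            Thread.Sleep(1000);
            backup.FetchAttributes(); // refresh CopyState
        }

        // Alternatively, take a read-only snapshot: it's instant, and you are
        // billed only for pages that later diverge from the base blob.
        CloudPageBlob snapshot = source.CreateSnapshot();
        Console.WriteLine("Snapshot taken at {0}", snapshot.SnapshotTime);
    }
}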
One question though: why such a large VM image? Are you storing OS + data on the same vhd? If so, it may make more sense to attach a separate data disk (also stored as a VHD in blob storage) to hold the data, and make independent copies / snapshots of it.
How would I write to a tmp/temp directory in a Windows Azure Web Site? I can write to a blob, but I'm using an npm package that requires me to give it file names so that it can write directly to those files.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way that traditional apps will be familiar with. However, it's not persistent, so anything you put there that must survive the VM instance getting repaved needs to be copied out to Blob storage. Also, this storage is specific to a given role instance; if you have two instances of the same role, they each have their own local storage buckets.
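For example, here is a minimal sketch of resolving a Local Storage path in a web/worker role, assuming a resource named TempStorage has been declared in your ServiceDefinition.csdef:

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

class TempFiles
{
    // Assumes ServiceDefinition.csdef declares:
    //   <LocalResources>
    //     <LocalStorage name="TempStorage" sizeInMB="1024" cleanOnRoleRecycle="true" />
    //   </LocalResources>
    static string GetTempFilePath(string fileName)
    {
        LocalResource res = RoleEnvironment.GetLocalResource("TempStorage");
        return Path.Combine(res.RootPath, fileName);
    }
}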
Alternatively, you can use Azure Drive, which allows you to keep the information persisted, but still doesn't allow multiple parallel writes.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
This is just my understanding, so please correct me if anything is wrong.
In Windows Azure Web Sites, the content of your website is stored in blob storage and mounted as a drive, which is shared by all the instances your web site runs on. And since it's in blob storage, it's persistent. So if you need the local file system, I think you can use the folders under your web site's root path. But I don't think you can use the system tmp or temp folder.
I am looking at moving to Windows Azure rather than typical hosting; however, I'm unsure how best to store images. After searching I've found that there are two possible solutions: Blob storage or Azure Drive.
I have looked into Blob storage and although I have begun to get used to the idea, it will require quite a lot of modification to our CMS. In my searching I have just stumbled across Azure Drive which, if I understand correctly, creates a virtual hard drive that allows your application to run as it would on a normal server.
Are there any disadvantages to Azure Drive compared with Blob storage? It sounds like migrating current applications to Azure will be much easier with Azure Drive than with Blob storage, but I just wanted to check that there aren't any major flaws in this.
Thanks
Pat
Yes, there are quite a few differences. First, the Windows Azure Drive is actually a VHD uploaded as a page blob and mounted by a driver to provide an NTFS partition. So, to get any data on it, you must mount it (or a snapshot of it). Data is not directly accessible without mounting.
Next, Drives can only be mounted read/write by one instance at a time. If you want anything else to even read that drive, you must snapshot it and mount the snapshot, which introduces a 'staleness' problem for the read-only instances. You can work around this with an SMB share, but that is slightly complicated.
You would lose the ability to automatically get CDN capabilities if you used a drive as well.
Drives are great for their intended purpose - getting applications that must use NTFS to work in Windows Azure.
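For reference, mounting looked roughly like this with the old Windows Azure SDK's CloudDrive API. I'm recalling the 1.x-era signatures here, so treat the exact namespaces and method names as assumptions, and the connection string and resource names as placeholders:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient; // CloudDrive lived in the CloudDrive assembly

class DriveMounter
{
    static string MountDrive()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

        // The driver needs a local cache backed by a Local Storage resource.
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        // The drive itself is just a VHD held in a page blob.
        CloudDrive drive = account.CreateCloudDrive(
            "http://myaccount.blob.core.windows.net/drives/mydrive.vhd");
        drive.Create(512); // size in MB; throws if the VHD already exists

        // Mount returns a local path (drive letter) that only this instance can write to.
        return drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}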
If you were to use Blobs natively, you would a) get the storage subsystem to scale and remove the load from your instances for serving the data, and b) be able to use the CDN to get geo-scale on the images as well.
While it is some work, I would strongly recommend putting images in blob storage. It is ideal for it.
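To give a feel for how little code the blob route actually needs, here is a small sketch with the classic WindowsAzure.Storage SDK; the account details, container, and file names are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ImageStore
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var container = account.CreateCloudBlobClient().GetContainerReference("images");
        container.CreateIfNotExists();

        // Make individual blobs publicly readable so browsers (or the CDN)
        // can fetch them without a SAS token.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

        var blob = container.GetBlockBlobReference("logo.png");
        blob.Properties.ContentType = "image/png";
        blob.UploadFromFile(@"C:\site\images\logo.png");

        Console.WriteLine("Serve the image from: {0}", blob.Uri);
    }
}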