I want to take an image of a currently running Linux VM in order to set a reserved IP, but the problem is that I don't want to lose any of the current VM's data. When I checked the site, it says you have to reset the data to create a generalized image of the VM. So if I want to take a specialized image instead, how do I do that? Could you please give the syntax? Am I going about taking a specialized image of the VM the right way?
You could look at capturing a snapshot of the underlying Linux .vhd page blob in Azure Storage and working with that. There are several blog posts and guidance articles around the internet on using blob snapshots to back up Azure virtual machines.
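If it helps, here is a minimal sketch of taking such a snapshot with the Azure PowerShell storage cmdlets; the storage account, container and blob names are placeholders rather than values from your deployment:

# Hedged sketch: snapshot the VM's OS-disk page blob (all names are placeholders)
$ctx  = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
$vhd  = Get-AzureStorageBlob -Container "vhds" -Blob "mylinuxvm-osdisk.vhd" -Context $ctx
$snap = $vhd.ICloudBlob.CreateSnapshot()   # read-only, point-in-time snapshot of the page blob
$snap.SnapshotTime                         # timestamp identifying this snapshot for later use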
I have uploaded a backup of an on-premises OS disk to an Azure RM storage account. I want to create an image out of that OS disk to provision VMs in Azure.
Please let me know if that is possible.
Assuming your base system is running Windows, and you don't want to sysprep the machine, you're trying to create what's called a Specialized image in Azure. The best steps to follow to complete this are:
Create the Specialized image
Create a VM from an image
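As a rough illustration of those two steps with the ARM PowerShell cmdlets (only a sketch, assuming the VHD is already in the RM storage account; the resource group, blob URI, OS type and image name are placeholders):

# Hedged sketch: register the uploaded OS VHD as a Specialized image
$rg  = "myResourceGroup"
$uri = "https://mystorageacct.blob.core.windows.net/vhds/onprem-osdisk.vhd"
$cfg = New-AzureRmImageConfig -Location "West Europe"
$cfg = Set-AzureRmImageOsDisk -Image $cfg -OsType Windows -OsState Specialized -BlobUri $uri
New-AzureRmImage -ResourceGroupName $rg -ImageName "onprem-specialized-image" -Image $cfg
# the resulting image resource can then be referenced when you create VMs (see the second link above)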
The first thing you need to make sure of is that the disk is in the correct format for Azure, i.e. VHD. There are many third-party tools available to convert the disk to VHD (very easy if it's a Hyper-V machine).
Secondly, create the necessary infrastructure in Azure, such as the resource group, a storage account to upload the disk to, a virtual network for the machine to connect to, etc. Also note that this is currently only possible through PowerShell and not through the portal.
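For the upload step, a hedged sketch using the ARM PowerShell cmdlet (the resource group, destination URI and local path are placeholders):

# Hedged sketch: upload a fixed-size VHD to the RM storage account as a page blob
Add-AzureRmVhd -ResourceGroupName "myResourceGroup" `
    -Destination "https://mystorageacct.blob.core.windows.net/vhds/onprem-osdisk.vhd" `
    -LocalFilePath "C:\vhds\onprem-osdisk.vhd"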
Azure Migrate has now made it very easy to migrate a large number of VMs if you are considering a production migration (much better than it was last year).
The question says you are unable to migrate the disk, so I assume you went through the Microsoft documentation and then ran into a problem. Can you provide the error you got while uploading?
I am trying to create a page blob using the Storage API and add it as a disk to the virtual machine. Is there a way this can be done?
Currently, when I create a blob and add it as the disk, the VM fails with provisioning state "failed".
It sounds like you want to use the Azure Storage SDK for Java to create a page blob and attach it as a data disk to a Linux VM. However, some of the concepts as you understand them are not accurate.
Firstly, you need to create a VHD file in your local environment. As references, you can follow the documents below.
On Windows, please refer to the document Create and Use a Virtual Hard Disk on Windows 7 to create a VHD file.
On Linux or macOS, you can install and configure QEMU/VirtualBox/KVM to create a disk image and convert it, for example converting a QEMU image with the qemu-img convert command (see the sketch below).
For more information, please see About disks and VHDs for Azure virtual machines
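For example, a qcow2 image could be converted to the fixed-size VHD format that Azure expects with a command along these lines (qemu-img must be installed; the file names are placeholders and the exact options may vary with your qemu version):

qemu-img convert -f qcow2 -O vpc -o subformat=fixed source.qcow2 disk.vhd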
Secondly, upload the VHD file you created to Azure Blob Storage as a page blob via AzCopy, or follow the related section of the tutorial Creating and Uploading a Virtual Hard Disk that Contains the Linux Operating System.
Then, you can refer to the document Add a disk to a Linux VM to attach the data disk stored in Azure Storage.
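Putting the upload and attach steps together, here is a minimal sketch with the classic (service management) PowerShell cmdlets, assuming a classic Linux VM; the cloud service, VM, storage account and file names are placeholders:

# Hedged sketch: upload a local VHD as a page blob, then attach it to the VM as a data disk
$dest = "https://mystorageacct.blob.core.windows.net/vhds/datadisk1.vhd"
Add-AzureVhd -Destination $dest -LocalFilePath "C:\vhds\datadisk1.vhd"
Get-AzureVM -ServiceName "mycloudservice" -Name "mylinuxvm" |
    Add-AzureDataDisk -ImportFrom -MediaLocation $dest -DiskLabel "data1" -LUN 0 |
    Update-AzureVM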
Meanwhile, based on my understanding, I think you may just want to extend the filesystem of your Linux VM. If so, another solution may suit your needs: mounting Azure File Storage on Linux VMs using the SMB protocol. For more details, please refer to How to use Azure File Storage with Linux.
Hope it helps. If you have any concerns, please feel free to let me know.
I am using two Microsoft Azure Virtual Machines (marked as classic), both running Linux. One is used for test purposes and internal demos; the other is production and runs a few of our clients' instances.
What I would like to do is change the size of a Virtual Machine. I understand this is quite a common process, can easily be done from the Azure Management Portal, and should not affect data. However, when I changed the size of our testing machine, exactly that happened and we lost all our data.
The answer received from Azure Support was:
"We recommend you delete the VM by keeping the attached disks and create a new VM with the required size." Not sure why this would be better?
Any data stored on the ephemeral (internal-to-chassis) scratch disk is at risk, as it's a non-durable disk (and will in all likelihood be destroyed/recreated upon resizing a VM).
The only way to have durable data is to use Azure Storage (blobs, vhd as attached disk, Azure File storage) or external database. Azure Storage is durable (minimum 3 copies), and is not stored with your VM.
One more thing: The VM's OS Disk is a VHD in Azure Storage (so the OS disk is durable, just like attached vhd's).
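To illustrate the attached-disk option, here is a hedged sketch that adds a new, empty durable data disk to a classic VM (the cloud service, VM name, size and LUN are placeholders):

# Hedged sketch: attach a new empty 100 GB data disk, backed by a page blob in Azure Storage
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "data" -LUN 0 |
    Update-AzureVM
# partition, format and mount the new disk inside the VM before using it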
You have more than one way to do that. Keep in mind what David said: data on OS disks, attached disks and blobs is the only durable data.
To prevent losing data and since you're using Classic VMs, you can do the following:
1- Go to your VM in the portal and capture an image of it.
2- Go to your new image and create a new VM out of it, specifying the new specs that you need (see the sketch after these steps).
3- When done, connect to your new VM while keeping the old one, without terminating it.
4- Check whether all your data is there; if yes, you can remove the old one. (In case you need the old IP, you can still assign it to the new one.)
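If you prefer to script those steps, a rough classic PowerShell equivalent could look like this (the cloud service, VM and image names are placeholders, and this is only a sketch):

# Hedged sketch: capture the existing classic VM as a Specialized image, then create a larger VM from it
Save-AzureVMImage -ServiceName "myservice" -Name "oldvm" -ImageName "oldvm-image" -OSState Specialized
New-AzureVMConfig -Name "newvm" -InstanceSize "Large" -ImageName "oldvm-image" |
    New-AzureVM -ServiceName "myservice"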
Cheers.
I have a medium instance in Windows Azure.
I need an image so I can make a new large instance, but when I create an image, it says I must delete the VM as part of the operation.
So, how can I make an image of the medium instance without deleting the current virtual machine?
Note: Amazon's cloud service can make an image without deleting the instance, including for instances running Microsoft server operating systems.
Actually, how do I create an image with minimum downtime? That's the true purpose of this question.
There are a couple of objectives I read from this. The first is that you want to make a medium instance large. You can change the size of a virtual machine without deleting it. Go to the configure tab for the VM and change the size. This will require a reboot, but it will keep your virtual machine intact.
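In classic PowerShell terms, that in-place resize is roughly the following (the cloud service and VM names are placeholders; the VM reboots as part of the update):

# Hedged sketch: resize a classic VM in place; this triggers a reboot but keeps the VM and its disks
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Set-AzureVMSize -InstanceSize "Large" |
    Update-AzureVM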
The second is to create an image with minimum downtime. As you know, this is not possible without destroying your existing VM; the details of sysprepping a machine are the reason (won't go into those details here). You could create a new virtual machine from your existing one and sysprep that copy, though. At least that way you're not incurring any downtime while you're creating the image. Not sure how helpful that is for your scenario. Personally, I would just resize your existing VM if that's all you need. Regardless, here are the steps.
Copy the VHD to a backup container in the same storage account.
Create a new disk from the copy of the VHD.
Create a new virtual machine based off the new disk. You can also specify the size at this step too.
Login and sysprep the new virtual machine.
Shutdown and capture the image of the new virtual machine.
This will get you an image without impacting your existing VM.
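For steps 1 and 2 (copying the VHD and registering the copy as a disk), a hedged PowerShell sketch with placeholder names:

# Hedged sketch: copy the OS vhd to a backup container, then register the copy as a disk
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "myvm-osdisk.vhd" `
    -DestContainer "backups" -DestBlob "myvm-osdisk-copy.vhd" -Context $ctx
Get-AzureStorageBlobCopyState -Container "backups" -Blob "myvm-osdisk-copy.vhd" -Context $ctx -WaitForComplete
Add-AzureDisk -DiskName "myvm-osdisk-copy" -OS Windows `
    -MediaLocation "https://mystorageacct.blob.core.windows.net/backups/myvm-osdisk-copy.vhd"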
There are two types of images: Specialized and Generalized.
You can check the details in VM Image.
For your scenario, you want to change the size of your VM, so you'll need a Generalized image, from which the original provisioning data (such as the VM size, the admin user's password, etc.) has been removed.
But in order to capture a Generalized image, you have to deprovision the original running VM.
For Windows in Azure:
%windir%\system32\sysprep\sysprep.exe /generalize /shutdown /oobe
For Linux in Azure:
$ sudo waagent -force -deprovision
$ shutdown -h now
Note: After deprovisioning, the original VM is useless to you, just like an orphan; you lose control of it since a lot of the original provisioning data has been removed. That's why Azure deletes the VM automatically after capturing the image successfully.
I agree with you that AWS EC2 is more powerful than Azure; a lot of services are inconvenient in Azure.
There is a VHD blob container which contains your VM's OS disk and data disks. You can copy the data disks and attach them to any VM.
When you are creating the image, you need to run sysprep, which generalizes the VM and removes machine-specific data, including your login, so the original VM is no longer usable anyway. Once the image is there, you can create your VM by selecting the image you created, plus the data disk if you want the old data to be there as well. And you can create as many copies as you want.
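As a hedged sketch of that last step with the classic PowerShell cmdlets (the image, disk, service, VM and credential names are placeholders):

# Hedged sketch: create a new VM from the captured (generalized) image and re-attach an existing data disk
New-AzureVMConfig -Name "newvm" -InstanceSize "Medium" -ImageName "my-captured-image" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<password>" |
    Add-AzureDataDisk -Import -DiskName "old-data-disk" -LUN 0 |
    New-AzureVM -ServiceName "mycloudservice"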
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of hosting via Azure at no cost, whereas I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e. what if someone were to run an UPDATE statement and forget the WHERE clause). Is there a mechanism for this with Azure?
I know the VMs are stored as .VHD files in my local Azure storage, but the VM image is 127 gigs. Downloading that nightly even with FIOS internet isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a vhd. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls though: There's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots, and return to a previous snapshot later. A snapshot is read-only (which you can copy from later) and takes up no space initially. However, as storage pages are changed, the snapshot grows.
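If you want to script the snapshot approach, a rough sketch with the Azure PowerShell storage cmdlets could look like this (the storage account, container and blob names are placeholders); it takes a snapshot and then copies the snapshot out to a new blob, which is one way to restore from it later:

# Hedged sketch: snapshot the vhd blob, then (later) copy the snapshot to a new blob to restore from it
$ctx  = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
$vhd  = Get-AzureStorageBlob -Container "vhds" -Blob "myvm-os.vhd" -Context $ctx
$snap = $vhd.ICloudBlob.CreateSnapshot()                   # read-only, point-in-time snapshot
Start-AzureStorageBlobCopy -ICloudBlob $snap -DestContainer "vhds" `
    -DestBlob "myvm-os-restored.vhd" -Context $ctx         # async copy of the snapshot to a new blob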
One question though: why such a large VM image? Are you storing OS + data on same vhd? If so, it may make more sense to mount a separate Azure Drive (also stored in VHD in blob storage) to store data, and make independent copies / snapshots.