Azure storage vhds - azure

Could someone please help me understand this? I created a Virtual Machine in Azure running Windows Server 2012. I noticed that Azure automatically created a storage account. When I go inside that storage account, click the Containers tab, and look under the vhds container, it shows a blob named name-name2-2014-12-05.vhd which is 127 GB and always has a recent Last Modified date. What is that for? Is that a live backup image of my entire server deployment? If so, where can I see how often it backs up?

When I go inside that storage account, click the Containers tab, and look under the vhds container, it shows a blob named name-name2-2014-12-05.vhd which is 127 GB and always has a recent Last Modified date. What is that for?
Virtual Machines in Azure are stateful in nature, meaning that any changes you make to a Virtual Machine, like installing software or creating files, are persisted. Azure achieves this by storing the Virtual Machine's VHD as a page blob in Azure Storage. What you see as name-name2-2014-12-05.vhd is the VHD from which Azure runs your VM.
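If you want to see this for yourself from the command line, here is a minimal sketch using the modern Azure CLI (the az tool postdates this question, and the storage account name is a placeholder):

# List the VHD page blobs that back your VMs (account name is a placeholder).
az storage blob list \
  --account-name mystorageaccount \
  --container-name vhds \
  --query "[].{Name:name, SizeBytes:properties.contentLength, LastModified:properties.lastModified}" \
  --output table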
Is that my live backup image of my entire server deployment?
It is your VM, not a backup image. If you delete it by mistake (Azure makes it quite hard to delete, but it's possible), your VM is gone. If you want, you can take a backup of it and store it somewhere else. Search for "Create Azure Virtual Machine Images" and you will find ample resources.
If so where can I see how often it backs up?
By default Azure keeps 2 extra copies of it (a total of 3, including the main one) in the data center, and if you have enabled geo-redundancy, Azure keeps 3 additional copies in a separate data center. However, please keep in mind that this is replication, not backup: any changes you make to your VM are replicated to all the copies. You would need to come up with your own backup approach.
My recommendation would be to read more about Azure Virtual Machines. I'm sure if you search for it, you will find plenty of resources.

Related

Easy backup and restore of VMs when using Azure

I have a couple of VMs that I want to back up and restore easily and often. Preferably as a group.
I have tried the default Azure backup and restore but noticed that it doesn't seem to do much. It is easy to create and schedule backups, but it is not clear to me how these backups can be used to bring a VM back to its original state.
The use cases for the default backup / restore seem to be very different from what I expected. I expected something somewhat similar to VirtualBox: take a snapshot and then restore takes the VM back to the snapshot.
Restore of a VM in Azure does not seem to be a supported use case. I think the idea is more to clone / duplicate the machine.
The default "restore" is a feature to create another VM because it you try to restore Azure shows an error message
A virtual machine with the same already exists in the selected resource group. Please change the virtual machine name or select a new resource group.
There is an option to restore disks. This seems to work at first: the restore job completes successfully, but nothing is restored; the file system is the same as before the restore.
There is no detailed log, so there is no way to determine what is happening. There is only an exit status: success, restore completed successfully.
Are there other ways to mimic the VirtualBox functionality, i.e. take snapshots and restore VMs from those snapshots?
Does MS have plans to enhance backup in such way that it also supports restore?
A snapshot captures the state of a virtual machine's disk at a point in time. In Azure, you can snapshot your VHD and restore that snapshot whenever you'd like. Get started with snapshots here: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/snapshot-copy-managed-disk
Disk snapshots work best when your virtual machine uses one disk (for the OS). If the virtual machine has more than one disk, disk snapshots are still useful, but they take time and may result in disk-management overhead. In that case, you could use image generalization to capture the whole virtual machine's state at once, using a tool like Sysprep (https://learn.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource).
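As a concrete illustration of that round trip, here is a minimal sketch with the Azure CLI against a managed OS disk; the resource group, VM, disk, and snapshot names are all placeholders:

# Snapshot the VM's OS disk (names are placeholders).
OSDISK_ID=$(az vm show -g myRG -n myVM \
  --query "storageProfile.osDisk.managedDisk.id" -o tsv)
az snapshot create -g myRG -n myVM-snap --source "$OSDISK_ID"

# Later: materialize a disk from the snapshot and boot a new VM from it.
az disk create -g myRG -n myVM-restored-disk --source myVM-snap
az vm create -g myRG -n myVM-restored \
  --attach-os-disk myVM-restored-disk --os-type windows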
From your screenshot, the problem is exactly what is highlighted in red: you cannot restore over the existing virtual machine in the same resource group. You must restore it somewhere else.
Azure Site Recovery and Azure Backup are designed to work with large deployments of virtual machines, with some capabilities for automation and disaster recovery.

I would like to change Microsoft Azure Virtual Machine size without losing my data

I am using two Microsoft Azure Virtual Machines (marked as classic), both running Linux. One is used for test purposes and internal demos; the other is production, running a few clients' instances.
What I would like to do is change the size of a Virtual Machine. I understand this is quite a common process, that it can easily be done from the Azure Management Portal, and that it should not affect data. However, when I changed the size of our testing machine, exactly this happened and we lost all data.
Azure Support answer received was:
"We recommend you delete the VM by keeping the attached disks and create a new VM with the required size." Not sure why this would be better?
Any data stored on the ephemeral (internal-to-chassis) scratch disk is at risk, as it's a non-durable disk (and will in all likelihood be destroyed/recreated upon resizing a VM).
The only way to have durable data is to use Azure Storage (blobs, a VHD as an attached disk, Azure File storage) or an external database. Azure Storage is durable (minimum 3 copies) and is not stored with your VM.
One more thing: the VM's OS disk is a VHD in Azure Storage, so the OS disk is durable, just like attached VHDs.
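For ARM-based VMs, attaching such a durable data disk is a one-liner with the modern Azure CLI; this is just a sketch with placeholder names (classic VMs, as in the question, predate the az tool):

# Attach a new, durable managed data disk (names are placeholders).
az vm disk attach -g myRG --vm-name myVM \
  --name myDataDisk --new --size-gb 128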
You have more than one way to do that. Keep in mind what David said: data on OS disks, attached disks, and blobs is the only durable data.
To prevent losing data, and since you're using classic VMs, you can do the following (a CLI sketch of the modern resize path follows after these steps):
1- Go to your VM in the portal and capture an image from it.
2- Go to your new image and create a new VM from it, specifying the new size that you need.
3- When done, connect to your new VM while keeping the old one running; don't terminate it yet.
4- Check that all your data is there. If it is, you can remove the old one. (In case you need the old IP, you can still assign it to the new one.)
Cheers.
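For completeness: on current (ARM, non-classic) VMs the size change can be done in place once the VM is deallocated. A minimal sketch with the modern Azure CLI, using placeholder names and size; note that deallocating still wipes the ephemeral temp disk David mentioned:

# Deallocate, resize, restart (ARM VMs; names and size are placeholders).
az vm deallocate -g myRG -n myVM
az vm resize -g myRG -n myVM --size Standard_D2s_v3
az vm start -g myRG -n myVM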

Will my files be lost if I stop and restart my Microsoft Azure Virtual Machine

Will my files and database be lost if I stop or restart my VM, or if my VM crashes?
Can the files created on the VM be stored on my computer's hard disk so that I can retrieve them in the future if I need to?
If your VM crashes you will not be able to access the VM or your data, but that doesn't mean you will lose the data: it is still stored there in blob storage.
What you need to do is attach that storage properly to some other VM, or a new VM, to access it again.
As previously mentioned, this was answered on another thread; the best thing to do is to download the VHD locally.
From the Windows Azure Portal you can easily download the VHD. Just navigate to STORAGE and then the storage account in which your virtual disk is created. Select CONTAINERS (at the top), open the container named "vhds". Just click the vhd you want and select DOWNLOAD (at the bottom of the page).
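The same download can be scripted. A minimal sketch with the modern Azure CLI, where the account name and local file name are placeholders and the blob name is the one shown in the portal (stop or deallocate the VM first so the copy is consistent):

# Download the VHD page blob to a local file (names are placeholders).
az storage blob download \
  --account-name mystorageaccount \
  --container-name vhds \
  --name name-name2-2014-12-05.vhd \
  --file ./server-backup.vhd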
Have a great day

Backup Microsoft Azure Virtual Machine

I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of hosting via Azure at no cost, where I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e. what if someone were to run an UPDATE statement and forget the WHERE clause). Is there a mechanism for this with Azure?
I know the VMs are stored as .VHD files in my local Azure storage, but the VM image is 127 gigs. Downloading that nightly even with FIOS internet isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a VHD. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls, though: there's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots and return to a previous snapshot later. A snapshot is read-only (you can copy from it later) and takes up no space initially; however, as storage pages change, the snapshot grows.
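For reference, the modern Azure CLI equivalents of both operations; the account, container, and blob names are placeholders:

# Server-side asynchronous copy of a VHD blob (fast within a data center).
az storage blob copy start \
  --account-name mystorageaccount \
  --destination-container backups \
  --destination-blob myvm-copy.vhd \
  --source-uri "https://mystorageaccount.blob.core.windows.net/vhds/myvm.vhd"

# Point-in-time, read-only snapshot of the same blob.
az storage blob snapshot \
  --account-name mystorageaccount \
  --container-name vhds \
  --name myvm.vhd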
One question though: why such a large VM image? Are you storing the OS and data on the same VHD? If so, it may make more sense to mount a separate data disk (also stored as a VHD in blob storage) for the data, and make independent copies / snapshots of it.

Backup Azure Virtual Machine local folders to blob storage?

I've just set up an extra small VM instance in Windows Azure to run a help console for our company. The help files can be updated and published through a simple .NET interface. Obviously the flat HTML files are getting deployed to the local drive on the VM and exposed publicly through IIS. I'm just wondering how stable this is? If the VM suffers a hardware failure, presumably there's no automatic failover and any edits we've made to the help system will be lost?
Can anyone recommend a way I can shuttle the source files out of the VM into blob storage? I could write an application to do this; I'm just wondering if there is an out-of-the-box solution out there.
Additional information:
The VM instance is running Server 2008 R2 SP1 (As a Virtual Machine not a web-role)
A backup needs to be created once every 24 hours
Aged backups (3+ days old) need to be automatically cleared from the blob container
The help system we use is called HelpConsole 2012
New pages are added at a rate of maybe 2-3 per week
The answer depends on whether you are running this in a Windows Azure Virtual Machine or in a Windows Azure Web role.
If you are running this on a Windows Azure Virtual Machine, then the VHD is stored in BLOB storage, and if the site is running off the C: drive and not on a data disk, the system has some host caching turned on for both reads and writes. In this scenario it is possible (depending on the methods you use to write your files out) that the data is not pushed back to the VHD in BLOB storage before a failure occurs. You can either ensure that your writing methods do a write-through operation, or turn off the write caching. Better yet, attach a data disk for your web site files: by default data disks have both read and write caching off (you could turn on read caching). Since the VHDs are persisted, you don't have to worry about the edits getting lost.
You can also script out taking a snapshot of the files and moving them to BLOB storage separately, or even push them somewhere else (see the sketch below). Another thing to think about with this option is that you have to care for the VM instances and keep them patched and up to date.
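Here is one shape such a script could take, sketched in bash with the modern Azure CLI (which postdates this answer). The storage account, container, and source path are hypothetical, and on a Windows VM you would translate the same steps into a scheduled PowerShell task. It covers the stated requirements: one backup per day, with backups older than 3 days cleared out.

#!/usr/bin/env bash
# Nightly backup of the flat help files to blob storage, keeping 3 days.
# Account, container, and source path below are placeholders.
ACCOUNT=helpstorage
CONTAINER=helpbackups
STAMP=$(date +%Y-%m-%d)

# Upload today's files under a date-stamped virtual folder.
az storage blob upload-batch \
  --account-name "$ACCOUNT" \
  --destination "$CONTAINER" \
  --destination-path "$STAMP" \
  --source /var/www/help

# Delete backups older than 3 days (ISO timestamps compare lexically).
CUTOFF=$(date -u -d '3 days ago' +%Y-%m-%dT%H:%M:%S)
az storage blob list \
  --account-name "$ACCOUNT" \
  --container-name "$CONTAINER" \
  --query "[].[name, properties.lastModified]" -o tsv |
while IFS=$'\t' read -r name modified; do
  if [[ "$modified" < "$CUTOFF" ]]; then
    az storage blob delete \
      --account-name "$ACCOUNT" \
      --container-name "$CONTAINER" \
      --name "$name"
  fi
done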
If you are running a Web Role, then yes, if a failure occurs and the VM goes through self-healing, it will indeed redeploy with the older files. In this case I'd recommend changing the code in the web role so that when it writes updates to the local file it also puts a copy of the file into BLOB storage. In addition, in the web role's OnStart you could reach out to BLOB storage and pull down all the new content locally. BE VERY CAREFUL with this approach, though, because it only really works well for ONE instance, not multiple. If you plan on running multiple instances of the server (and you will have to if you want the SLA for uptime), then your code will need to be a little more robust: do the writes out to BLOB storage and then alert all instances of the role that there is a new file to pull down locally.
Another option for web roles is to write a handler for the content, so that incoming requests are mapped directly to a file in BLOB storage. Updates then go straight to the file in BLOB storage. This offloads the serving of the flat files from your compute nodes to BLOB storage, and you could even implement some caching and stream the content back through the handler rather than having clients hit BLOB storage directly.
Now, another option is to use Windows Azure Web Sites for this. The underlying storage of the web site files in Windows Azure Web Sites is a shared location, so updating the files there is immediately reflected for all instances. Also, the content for the site is stored in BLOB storage and can be updated via FTP, source control, or directly from code. Lots of options here. You may end up moving to reserved instances to keep away from some of the quotas that Web Sites have. Web Sites may not be an option for you currently, depending on other requirements (such as how much control you need over the environment, since you don't get a lot of control with Web Sites).
