Is there any information out there on how to back up and restore the data in the Azure Storage Emulator? (Note that this is not the live version of Azure.)
We are developing several different solutions against Windows Azure Storage, but since the storage emulator is basically just "one storage account", we need to separate the data per project. Are there easy/handy ways to do this without having to manually extract and put back all the data?
Storage emulator data is stored in a local database, so I'd assume it could be backed up and restored as needed, though I've never tried it personally.
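If you want to script that idea, a minimal sketch might look like the following. It assumes the emulator keeps its data in a SQL LocalDB database and that sqlcmd is on the PATH; the database name and instance name below are version-dependent assumptions you should verify against your own installation (older emulator versions used a .\SQLEXPRESS instance and a different database name), and depending on the version some blob payloads may also live in files on disk that you would need to copy separately.

```python
import subprocess
from pathlib import Path

# Assumptions (verify against your emulator install):
#   - the emulator data lives in a SQL LocalDB database; the name below is
#     hypothetical and varies by emulator version
#   - sqlcmd.exe is on the PATH and the (localdb)\MSSQLLocalDB instance is running
DB_NAME = "AzureStorageEmulatorDb57"               # version-dependent
BACKUP_FILE = Path(r"C:\Backups\emulator-projectA.bak")

def backup_emulator_db() -> None:
    """Take a full backup of the emulator database so it can be restored per project."""
    sql = f"BACKUP DATABASE [{DB_NAME}] TO DISK = N'{BACKUP_FILE}' WITH INIT"
    subprocess.run(["sqlcmd", "-S", r"(localdb)\MSSQLLocalDB", "-Q", sql], check=True)

def restore_emulator_db() -> None:
    """Restore a previously taken backup (stop the emulator first)."""
    sql = f"RESTORE DATABASE [{DB_NAME}] FROM DISK = N'{BACKUP_FILE}' WITH REPLACE"
    subprocess.run(["sqlcmd", "-S", r"(localdb)\MSSQLLocalDB", "-Q", sql], check=True)

if __name__ == "__main__":
    backup_emulator_db()
```

Keeping one .bak file per project and restoring the right one before starting the emulator would give you the per-project separation you're after.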
There are a bunch of ways to manually sync blobs inside an Azure Storage account to a local file-system folder.
One way might be to use AzCopy to download all blobs of a container, and repeat that for every container in the account. Of course this doesn't scale well and is only good for a one-time operation or an ad-hoc snapshot.
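For context, a minimal sketch of that per-container, one-shot download done with the Python azure-storage-blob SDK instead of AzCopy might look like this; the connection string and target folder are placeholders.

```python
from pathlib import Path
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Placeholders -- substitute your own values.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
TARGET_ROOT = Path("./storage-mirror")

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)

# Walk every container in the account and download every blob into a matching
# local folder. This is the simple "ad-hoc snapshot" approach: it re-downloads
# everything and does not detect deletions.
for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    for blob in container_client.list_blobs():
        local_path = TARGET_ROOT / container.name / blob.name
        local_path.parent.mkdir(parents=True, exist_ok=True)
        with open(local_path, "wb") as f:
            f.write(container_client.download_blob(blob.name).readall())
```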
Another option is to use Blob events and manually sync each blob with the local file-system folder. This method is not available in all regions yet, and it can't be trusted for long-term operation, since if the two sides get out of sync for any reason, they stay out of sync.
Is there a way to mirror an entire Azure Storage account to a local folder?
Here are several approaches you could follow:
This approach is especially fast if only a few files have been added, updated, or deleted. If many have changed it is still fast, but uploading/downloading the files to/from blob storage becomes the main time factor. The algorithm I'm going to describe was developed by me when I was implementing a non-live-editing Windows Azure deployment model for Composite C1.
Refer to "fast recursive local folder to/from Azure blob" for the details.
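The gist of such an incremental sync, as I understand it, is to compare what's in the container with what's on disk and only transfer the differences. A rough sketch of the download direction, assuming the Python azure-storage-blob SDK and using last-modified timestamps as the change signal (connection string, container, and folder are placeholders), might look like:

```python
from pathlib import Path
from azure.storage.blob import ContainerClient  # pip install azure-storage-blob

# Placeholders -- substitute your own values.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
CONTAINER = "mycontainer"
LOCAL_ROOT = Path("./local-copy")

container = ContainerClient.from_connection_string(CONNECTION_STRING, CONTAINER)

# Incremental download: only fetch blobs that are missing locally or newer
# than the local copy. Deletions are not handled in this sketch.
for blob in container.list_blobs():
    local_path = LOCAL_ROOT / blob.name
    remote_mtime = blob.last_modified.timestamp()
    if local_path.exists() and local_path.stat().st_mtime >= remote_mtime:
        continue  # local copy is up to date
    local_path.parent.mkdir(parents=True, exist_ok=True)
    with open(local_path, "wb") as f:
        f.write(container.download_blob(blob.name).readall())
```

The upload direction works the same way in reverse, which is why the whole sync stays fast when only a handful of files have changed.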
Also, you could install GoodSync to sync files between your local machine and the cloud.
Another option is to install Gladinet Cloud Desktop 3.0 and mount your Azure Blob Storage account. When you click on the "Add New Cloud Sync Folder" link, you will see a dialog where you can configure the sync folder.
I am trying to upgrade an Azure DB in a continuous-release scenario. The DB lives in SQL Azure and its size keeps growing; it is now over 50 GB. In my previous on-premises experience, I usually back up the old DB in a compressed format and save it to an on-premises file server. If the upgrade fails, I can restore it safely.
But with SQL Azure, I am not sure whether it's practical to download such a big DB. Is there any best practice for the SQL Azure DB upgrade scenario?
Update:
I found this link describing different SQL Azure backup strategies, but it would be great if someone could share some field experience.
Azure now has automatic exports (aka full backups) to blob storage that you can schedule. The .bacpac files are complete, compressed copies of your database, and blob storage is pretty cheap. To give you an idea of the size: we have a 20 GB database whose export is only about 500 MB. We typically keep 14 days of backups, but how long to retain them is up to your needs.
It's kind of like the Ron Popeil Rotisserie. You just set it, and forget it.
Obviously, after you take a backup you want to restore it somewhere else to verify that it worked. It's also a good idea to periodically restore your backups to make sure they keep working over time. You can do all of this in the Azure Portal: just create a new database based on a .bacpac file that you created from the automated export.
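If you'd rather script that verification restore than click through the portal, a hedged sketch using SqlPackage could look like this; it assumes SqlPackage is installed and the exported .bacpac has already been downloaded from blob storage, and the server, database, and credential values are placeholders.

```python
import subprocess

# Placeholders -- substitute your own file path, server, and credentials.
BACPAC_FILE = r"C:\Backups\mydb-export.bacpac"
TARGET_SERVER = "myserver.database.windows.net"
TARGET_DB = "mydb_restore_check"   # throwaway database used only to verify the backup

# Import the .bacpac into a fresh database. If this succeeds (and a few sanity
# queries against TARGET_DB look right), the export is restorable.
subprocess.run(
    [
        "SqlPackage",
        "/Action:Import",
        f"/SourceFile:{BACPAC_FILE}",
        f"/TargetServerName:{TARGET_SERVER}",
        f"/TargetDatabaseName:{TARGET_DB}",
        "/TargetUser:myadmin",
        "/TargetPassword:...",
    ],
    check=True,
)
```

Dropping the throwaway database afterwards keeps the verification cheap.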
You actually don't have to download the DB on-premises unless you want another copy locally; if you are using geo-redundant blob storage, it's already replicated to another region and you have six copies in total. But again, it's up to you.
When you log into the management portal, navigate to the SQL Databases tab. Click on your DB and then click Configure. There you can set up automated backups of your DB to blob storage.
The path on the management portal looks like this:
https://manage.windowsazure.com/mycompany.com#Workspaces/SqlAzureExtension/SqlServer/coolazuredb/Database/5.coolazuredb/Config
Here is a screenshot of the automated export section:
We are new to Windows Azure and are developing a web application. At the beginning of the project we deployed the complete code to the different environments, which published the full code and uploaded blob objects to Azure Storage, since we configured Sitefinity to hold its blob objects in Azure Storage. Now that we are in the middle of development, we only need to upload newly created blob files, which can be quite few in number (one or two, maybe a handful). I would like to know the best process to sync these blob files across the different Azure Storage environments (one per cloud service). Ideally we would like to update the staging cloud service and staging storage first, test there, and then, once no bugs are found, update the UAT and production storage accounts with the changed or new blob objects.
Please help.
You can use the Azure Storage Explorer to manually upload/download blobs from storage accounts very easily. For one or two blobs this is an easy solution; otherwise you will need to write a tool that connects to blob storage via the API and does the copying for you.
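For the "write a tool" route, a minimal sketch using the Python azure-storage-blob SDK that promotes a handful of named blobs from one storage account to another could look like the following; the connection strings, container name, and blob names are all placeholders you would replace with your own.

```python
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Placeholders -- substitute your own connection strings, container, and blob names.
SOURCE_CONN = "DefaultEndpointsProtocol=https;AccountName=stagingacct;AccountKey=...;"
TARGET_CONN = "DefaultEndpointsProtocol=https;AccountName=uatacct;AccountKey=...;"
CONTAINER = "sitefinity-assets"
CHANGED_BLOBS = ["images/new-banner.png", "docs/updated-terms.pdf"]  # hypothetical names

source = BlobServiceClient.from_connection_string(SOURCE_CONN).get_container_client(CONTAINER)
target = BlobServiceClient.from_connection_string(TARGET_CONN).get_container_client(CONTAINER)

# Copy each changed blob by downloading it from the staging account and
# re-uploading it to the target environment. Fine for a handful of small files.
for name in CHANGED_BLOBS:
    data = source.download_blob(name).readall()
    target.upload_blob(name, data, overwrite=True)
```

Running the same script against the UAT and production connection strings, once staging has been tested, gives you the promotion flow you describe.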
I am running a VM in Windows Azure. It has two disks attached to it (OS 40GB and DATA 60GB).
In addition to my two VHDs, the storage account contains one more 40 GB VHD named dmzvyyq2.jja20130312104458.vhd.
I would like to know where this VHD came from and what is using it. Surprisingly, the 'LAST MODIFIED' date is yesterday, so something must have updated it. I went through all the options in the Portal, but nothing seems to have this VHD attached.
Ultimately I would like to delete this VHD to save storage space and cost.
One way to find this out is by using Storage Analytics. If you have storage analytics logging enabled, you can view the contents of the $logs blob container, download the logs for the date in question, and check for all activity on this particular blob. You can use a tool like Azure Management Studio from Cerebrata to view storage analytics data. However, if you haven't enabled analytics on your storage account, it will be very hard to find that information.
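If you do have logging enabled and prefer to script the search instead of using a tool, a rough sketch that scans the $logs container for entries mentioning the mystery VHD might look like this; the connection string is a placeholder, and the assumption is that the blob-service logs sit under the "blob/" prefix as semicolon-delimited lines.

```python
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Placeholders -- substitute your own connection string.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
MYSTERY_VHD = "dmzvyyq2.jja20130312104458.vhd"

logs = BlobServiceClient.from_connection_string(CONNECTION_STRING).get_container_client("$logs")

# Storage Analytics writes delimited log lines into blobs under $logs, organized
# by service and timestamp. Scan them for any line that mentions the VHD.
for log_blob in logs.list_blobs(name_starts_with="blob/"):
    text = logs.download_blob(log_blob.name).readall().decode("utf-8", errors="replace")
    for line in text.splitlines():
        if MYSTERY_VHD in line:
            print(log_blob.name, "->", line)
```

The matching lines include the operation type and requester details, which should tell you what created or last touched the VHD.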
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of hosting via Azure at no cost, whereas I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (e.g. what if someone were to run an UPDATE statement and forget the WHERE clause?).
I know the VMs are stored as .VHD files in my Azure storage account, but the VM image is 127 GB. Downloading that nightly, even with FiOS internet, isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a vhd. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls though: There's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots and return to a previous snapshot later. A snapshot is read-only (you can copy from it later) and takes up no space initially; however, as storage pages change, the snapshot grows.
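For reference, a minimal sketch of taking and listing snapshots of the VHD blob with the Python azure-storage-blob SDK might be the following; the connection string, container name ("vhds"), and blob name are placeholders.

```python
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Placeholders -- substitute your own connection string, container, and VHD blob name.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("vhds")
vhd = container.get_blob_client("myvm-datadisk.vhd")

# Take a point-in-time, read-only snapshot of the VHD blob. The snapshot is
# identified by the timestamp returned here and can be copied from later.
snapshot = vhd.create_snapshot()
print("snapshot taken at:", snapshot["snapshot"])

# List the base blob together with its snapshots.
for blob in container.list_blobs(name_starts_with="myvm-datadisk.vhd", include=["snapshots"]):
    print(blob.name, blob.snapshot)
```

A scheduled job running something like this nightly gives you the restore points without downloading 127 GB anywhere.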
One question, though: why such a large VM image? Are you storing OS + data on the same VHD? If so, it may make more sense to mount a separate Azure Drive (also stored as a VHD in blob storage) to hold the data, and make independent copies / snapshots of it.