I recently created a SQL Server virtual machine on Microsoft Azure. I have defined my backup jobs and store the backups on a different drive. I want to make sure that my backups are safe, i.e., on zone-redundant storage.
I have heard and read about that kind of storage, but I don't understand how to create it and how to make sure that my SQL backups are stored there directly.
Is there any other safe option? On AWS you can save your files in a bucket, which you can access like a mapped drive... What does Azure offer?
When you create a storage account, you can select the type of redundancy you want. Even after creating the storage account, you can change its properties.
When you attach a new disk to a VM, you can select any storage account in your subscription. So if your storage account is zone-redundant and you create a VHD in that storage account, the data in that VHD will be zone-redundant.
You can learn about different storage accounts here:
https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
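For illustration, here is a minimal sketch of creating a zone-redundant (ZRS) storage account with the Python management SDK. The subscription, resource group, account name, and region are placeholders, and ZRS availability depends on the region you pick:

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"          # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a general-purpose v2 account with zone-redundant storage.
# begin_create is the long-running operation in newer azure-mgmt-storage versions.
poller = client.storage_accounts.begin_create(
    resource_group_name="my-rg",                    # assumed resource group
    account_name="mysqlbackups01",                  # must be globally unique
    parameters={
        "location": "westeurope",                   # pick a region that supports ZRS
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},            # zone-redundant storage
    },
)
account = poller.result()
print(account.name, account.sku.name)
```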
Which version of SQL Server are you using?
Depending on which version of SQL Server you are using, you may be able to back up directly from SQL Server to Azure blob storage without saving the backup to a local disk first.
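As a rough sketch of that backup-to-URL approach, driven from Python via pyodbc: this assumes SQL Server 2016 or later (which accepts a SAS-based credential), and the server, database, container URL, and SAS token below are all placeholders.

```python
# pip install pyodbc
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP cannot run inside an implicit transaction
)
cur = conn.cursor()

container_url = "https://mysqlbackups01.blob.core.windows.net/backups"  # placeholder
sas_token = "<sas-token-without-leading-question-mark>"                 # placeholder

# When authenticating with SAS, the credential name must match the container URL.
# This only needs to run once per container.
cur.execute(
    f"CREATE CREDENTIAL [{container_url}] "
    f"WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '{sas_token}';"
)

# Write the backup straight to blob storage; no local staging disk is needed.
cur.execute(f"BACKUP DATABASE MyDatabase TO URL = '{container_url}/MyDatabase.bak';")
while cur.nextset():  # drain informational messages until the backup finishes
    pass
```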
I am thinking of using Azure Blob Storage for a document management system which I am developing. All blobs (images, videos, Word/Excel/PDF files, etc.) will be stored in Azure blob storage. As I understand it, I need to create a container, and these files can be stored within the container.
I would like to know how to safeguard against accidental/malicious deletion of the container. If a container is deleted, all the files it contains will be lost. I am trying to figure out how to put backup and recovery mechanism in place for my storage account so that it is always guaranteed that if something happens to a container, I can recover files inside it.
Is there any way provided by Microsoft Azure for such backup and recovery, or do I need to explicitly write code so that files are stored in two separate blob storage accounts?
Anyone with access to your storage account's key (primary or secondary; there are two keys for a storage account) can manipulate the storage account in any way they see fit. The only way to ensure nothing happens? Don't give anyone access to the key(s). If you place the storage account within a resource group that only you have permissions on, you'll at least prevent others with access to the subscription from discovering the storage account and accessing it.
Within the subscription itself, you can place a lock on the actual resource (the storage account), so that nobody with access to the subscription accidentally deletes the entire storage account.
Note: with storage account keys, you do have the ability to regenerate the keys at any time. So if you ever suspected a key was compromised, you can perform a re-gen action.
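As a rough sketch of both ideas, assuming the azure-mgmt-resource and azure-mgmt-storage Python SDKs: the resource names are placeholders, and the lock method mirrors the ARM management-locks API, so the exact method and parameter names may vary between SDK versions.

```python
# pip install azure-identity azure-mgmt-resource azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ManagementLockClient
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
cred = DefaultAzureCredential()

# 1) A CanNotDelete lock so nobody removes the whole storage account by accident.
locks = ManagementLockClient(cred, subscription_id)
locks.management_locks.create_or_update_at_resource_level(
    resource_group_name="my-rg",
    resource_provider_namespace="Microsoft.Storage",
    parent_resource_path="",
    resource_type="storageAccounts",
    resource_name="mydocsaccount",
    lock_name="do-not-delete",
    parameters={"level": "CanNotDelete", "notes": "Protect document blobs"},
)

# 2) If you suspect a key was leaked, regenerate it (clients using it will break).
storage = StorageManagementClient(cred, subscription_id)
storage.storage_accounts.regenerate_key("my-rg", "mydocsaccount", {"key_name": "key1"})
```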
Backups
There are several backup solutions offered for blob storage in case containers get deleted. More product information can be found here: https://azure.microsoft.com/en-us/services/backup/
Redundancy
If you are concerned about availability: "The data in your Microsoft Azure storage account is always replicated to ensure durability and high availability. Replication copies your data, either within the same data center, or to a second data center, depending on which replication option you choose." There are several replication options:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS)
Read-access geo-redundant storage (RA-GRS)
More details can be found here:
https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy
Managing Access
Finally, managing access to your storage account is the best way to secure it and avoid any loss of your data. If you don't want anyone to delete files, folders, etc., you can grant read-only access through Shared Access Signatures (SAS), which let you create policies and grant access based on Read, Write, List, Delete, and so on. A quick GIF demo can be seen here: https://azure.microsoft.com/en-us/updates/manage-stored-access-policies-for-storage-accounts-from-within-the-azure-portal/
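As an illustration, here is a minimal sketch with the azure-storage-blob Python SDK that issues a read/list-only SAS for a container; the account, key, and container names are placeholders:

```python
# pip install azure-storage-blob
# Minimal sketch: issue a read/list-only SAS for a container so holders of the
# URL can download and enumerate blobs but never delete or overwrite them.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

account_name = "mydocsaccount"          # placeholder
account_key = "<account-key>"           # placeholder
container_name = "documents"            # placeholder

sas = generate_container_sas(
    account_name=account_name,
    container_name=container_name,
    account_key=account_key,
    permission=ContainerSasPermissions(read=True, list=True),  # no write/delete
    expiry=datetime.now(timezone.utc) + timedelta(hours=8),
)

read_only_url = f"https://{account_name}.blob.core.windows.net/{container_name}?{sas}"
print(read_only_url)
```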
We are using blob to store documents and for documents management.
To prevent accidental deletion of blobs, you can now enable soft delete, as described here:
https://azure.microsoft.com/en-us/blog/soft-delete-for-azure-storage-blobs-ga/
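A small sketch of turning soft delete on via the azure-storage-blob SDK; the connection string and retention period are placeholders:

```python
# pip install azure-storage-blob
# Sketch: enable blob soft delete so deleted blobs can be undeleted for N days.
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder

service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)  # keep 14 days
)
```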
You can also build your own automation around PowerShell or AzCopy to do incremental and full backups, along the lines of the sketch below.
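For instance, a rough sketch of a full-copy pass that mirrors a container into a second, independent storage account. All account and container names are placeholders, and a real backup job would also handle paging, retries, and incremental filtering by last-modified time:

```python
# pip install azure-storage-blob
# Rough sketch: copy every blob in a container to a second storage account
# as a simple "full backup" pass.
from azure.storage.blob import BlobServiceClient

source = BlobServiceClient.from_connection_string("<source-connection-string>")
backup = BlobServiceClient.from_connection_string("<backup-connection-string>")

src_container = source.get_container_client("documents")
dst_container = backup.get_container_client("documents-backup")

for blob in src_container.list_blobs():
    src_blob = src_container.get_blob_client(blob.name)
    dst_blob = dst_container.get_blob_client(blob.name)
    # Server-side copy; the source URL must be readable by the target service
    # (public container, or append a SAS token to src_blob.url).
    dst_blob.start_copy_from_url(src_blob.url)
```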
The last element would be to use RA-GRS storage, where you can read from a secondary copy of your blobs in another region in case the primary data center goes down.
Designing Highly Available Applications using RA-GRS
https://learn.microsoft.com/en-us/azure/storage/common/storage-designing-ha-apps-with-ragrs?toc=%2fazure%2fstorage%2fqueues%2ftoc.json
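A tiny sketch of falling back to the RA-GRS secondary endpoint for reads; the `-secondary` host suffix is the standard convention, and the account, credential, and blob names are placeholders:

```python
# pip install azure-storage-blob
# Sketch: read a blob from the RA-GRS secondary endpoint if the primary fails.
from azure.core.exceptions import AzureError
from azure.storage.blob import BlobClient

account = "mydocsaccount"             # placeholder
credential = "<account-key-or-sas>"   # placeholder
container, blob_name = "documents", "contract.pdf"

def read_blob(host: str) -> bytes:
    blob = BlobClient(
        account_url=f"https://{host}.blob.core.windows.net",
        container_name=container,
        blob_name=blob_name,
        credential=credential,
    )
    return blob.download_blob().readall()

try:
    data = read_blob(account)                  # primary endpoint
except AzureError:
    data = read_blob(f"{account}-secondary")   # read-only secondary endpoint
```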
Use Microsoft's Azure Storage Explorer. It will allow you to download the full contents of blob containers including folders and subfolders with blobs. Conversely, you can upload to containers in the same way. Simple and free!
I've searched the web and contacted technical support yet no one seems to be able to give me a straight answer on whether items in Azure Blob Storage are backed up or not.
What I mean is, do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
I know with AWS, storage is redundantly backed up via onsite drives as well as across other nodes in the cluster.
do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
Yes, you will need to do backups manually; Azure Storage does not back up the contents of your storage account automatically.
Azure Storage does provide geo-redundant replication (provided you configure the redundancy level for your storage account as GRS or RA-GRS), but that is not backup: once you delete content from your primary account (location), it is automatically removed from the secondary account (geo-redundant location) as well.
Both AWS (EBS) and Azure (Blob Storage) provide durability by replicating the data across different data centers. This is how the cloud provider guarantees high availability and durability of the data.
In order to ensure that your data is durable, Azure Storage has the ability to keep (and manage) multiple copies of your data. This is called replication, or sometimes redundancy. When you set up your storage account, you select a replication type. In most cases, this setting can be modified after the storage account is set up.
For more details, refer to the replication section of the documentation.
If you need to capture changes to the storage and allow restores to previous versions (e.g., in situations like data corruption, or application requirements like restore points and backups), you need to take a snapshot manually. This is common to both AWS and Azure.
For more details on creating a snapshot of a blob in Azure, refer to the documentation.
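A minimal sketch of taking and restoring a blob snapshot with the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders:

```python
# pip install azure-storage-blob
# Sketch: take a point-in-time snapshot of a blob and restore from it later.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder
blob = service.get_blob_client("documents", "contract.pdf")

# Take a snapshot; the returned properties include the snapshot timestamp/id.
snapshot = blob.create_snapshot()
snapshot_id = snapshot["snapshot"]

# Restore the base blob by copying the snapshot over it
# (if the copy is rejected, append a SAS token to the source URL).
blob.start_copy_from_url(f"{blob.url}?snapshot={snapshot_id}")
```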
When working with a VHD hosted within an Azure Storage account, are there any operations one can perform to access the Storage account directly?
I.e., I create a VM and store its VHD in a blob in account A; are there any local/efficient ways to work with data in account A from the VM?
See if the Azure Files service will work for you. You can attach your storage as a file share and communicate with it directly using traditional file APIs.
Apart from that, you can use the cross-platform Azure Storage Explorer to work with other storage subservices such as blobs.
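For instance, a small sketch with the azure-storage-file-share Python SDK that writes a file to a share from inside the VM (the connection string, share name, and file names are placeholders); the same share can also be mounted over SMB as a drive letter:

```python
# pip install azure-storage-file-share
# Sketch: talk to an Azure Files share directly from the VM.
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    "<connection-string>",  # placeholder storage account connection string
    share_name="vmdata",    # placeholder share name
)

# Upload a local file to the root of the share.
file_client = share.get_file_client("report.csv")
with open("report.csv", "rb") as data:
    file_client.upload_file(data)
```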
We are using Azure Virtual machines to host our application in the cloud.
A couple of virtual machines host the web front end (stateless) and one virtual machine hosts SQL Server (the data is stored on a data disk).
As we all know, these virtual machines consist of an OS disk and an optional data disk, which are VHD files stored in blob storage. We are using geo-redundant blob storage to store these VHD files.
We are now planning for disaster recovery for our cloud application. So if a Microsoft data center is down, is it possible to spin up virtual machines in another data center with the help of OS Disk and Data Disk stored in geo-replicated storage?
You are not supposed to use geo-replicated storage with SQL Server data disks. This is documented at https://msdn.microsoft.com/library/azure/dn133149.aspx. Specifically, the document states "When creating a storage account, disable geo-replication as consistent write order across multiple disks is not guaranteed. Instead, consider configuring a SQL Server disaster recovery technology between two Azure data centers".
Currently you cannot control if or when Microsoft fails over to the secondary (geo-replicated) storage account; Microsoft controls that.
As I understand it, in the event that Microsoft does declare a disaster and fails over, then your VMs would still work. Perhaps you'd have to create the VM again from the VHD, but the data would be there (minus anything lost since the last sync to storage).
Planning on using Azure's VMs to host SQL and IIS. Not using local storage but the geo-redundant storage.
What's the best solution to backup this environment? Copy the VHDs locally?
I'm thinking of transferring the drives into something I could mount in Hyper-V. Is that possible? Happy to buy a product if required.
The persistent disks of a VM are stored in blob storage. This means you can leverage features like taking snapshots of these disks (blobs).
In order to create snapshots, you can use the REST API, the .NET SDK, or even Cerebrata's Cloud Storage Studio. If you ever need the backup, you can download the snapshot and mount it in Hyper-V.
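As a minimal sketch of that last step, here is how you might pull a snapshot of a VHD blob down to a local .vhd file with the Python blob SDK (the connection string, container, blob, and snapshot id are placeholders), which you could then attach in Hyper-V:

```python
# pip install azure-storage-blob
# Sketch: download a snapshot of a VHD page blob to a local file.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder
snapshot_id = "2024-01-01T00:00:00.0000000Z"  # placeholder snapshot timestamp

vhd = service.get_blob_client("vhds", "sql-vm-datadisk.vhd", snapshot=snapshot_id)

with open("sql-vm-datadisk.vhd", "wb") as local_file:
    vhd.download_blob().readinto(local_file)
```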