I have a scenario in which I need to fail over an on-premises SQL Server with disks larger than 2 TB. I know ASR does not support that, so I am trying to find out if there is a possible workaround, such as striping the disks.
For example, I could stripe the disks on the on-premises machine and then do a failover. However, that would require me to take the machine offline, which I cannot afford to do.
So, please let me know of a possible workaround or if there is a feature in ASR that I am not aware of.
Thanks in advance
You could use either dynamic disks or Storage Spaces to create a scenario where you replicate a volume of over 1023 GB in size. The only catch is that a single disk cannot be larger than 1023 GB, and you have to honor the maximum capacity of the VM (VMs in Azure are limited in the number of data disks you can attach).
We have a VM using premium managed disks that is also replicated to another Azure data center using Azure Site Recovery. I am aware of how to convert the premium disks to standard by deallocating the VM and changing the disk type. However, I suspect I will need to stop and remove the disaster recovery replication and reinitialize VM replication, resulting in the loss of all previous recovery points.
Does anyone know for sure, and what would the process be to convert the disks given the VM replication?
Thx.
You need to reach out to the right Azure support team. The Azure Site Recovery support team should be able to give you the correct information; they handle disaster recovery replication scenarios.
You want to make sure you are getting vetted information on issues like this.
If I set up a server running my application on an Azure instance, for example an A1, can I later change the instance to a D2?
I might want to experiment with a VM at a lower cost but then move to a higher-performing machine at a later date without having to rebuild everything.
Yes, you can change the size of an Azure VM on demand. Changing the size triggers a machine reboot, and if you're using a configuration with a temporary SSD drive, the content of the SSD will be erased. Other than that, everything is left untouched.
Drew, the Principal PM in this area, has a great blog about this here.
You can only resize a VM to another offering that does not run on fundamentally different hardware. Since A-series and D-series VMs have similar hardware, you would be able to swap between those two, but you would not be able to go from A-series to G-series. In addition, you need to check VM availability per region if you want to swap to a size offered only in certain areas, as well as whether you are using an ASM (classic) or ARM VM.
If you have an existing VM, you can check which sizes it can swap to in the new portal under "Size" in the VM settings.
This will let you reboot into a different machine type; however, any temp storage will be erased, as with any VM reboot. Just make sure you are storing your persistent data on external storage.
You can learn more about the VM size offerings here.
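If you prefer to script the resize, here is a minimal sketch with the current Azure CLI (resource group, VM name, and target size are placeholders; this uses the ARM-based `az` tool, which postdates parts of this thread):

```
# List the sizes this VM can be resized to on its current hardware cluster.
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table

# Resize the VM. This reboots the machine and wipes the temporary (ephemeral) disk.
az vm resize --resource-group myResourceGroup --name myVM --size Standard_D2_v3
```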
I see there are some limitations on Azure:
1. On the number of disks that can be attached to a VM;
2. The size of each disk/storage blob is limited to 1 TB.
Is there any hack or workaround to attach larger disks, or several more disks, to the same VM without increasing the VM's processing power? My application doesn't need high computing capacity, but it needs plenty of space.
Maybe it's possible by contacting their billing department?
Currently I'm using an A1 Standard VM instance with 2 disks (2 TB in total) already attached to it. The goal is to attach 5 TB of total disk space to the same VM without upgrading to a larger instance size.
You will need to change your VM size to attach more disks. One option is to look at the Basic tier instead of Standard tier A-series VMs to optimize your cost; since you do not need a lot of computing power, Basic tier VMs may work fine for you. Look at Basic A3, which will allow you to attach a maximum of 8 data disks of 1 TB each. See more information here (https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/)
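If it helps, here is a minimal sketch with the current Azure CLI (resource names are placeholders; the thread predates managed disks, so treat this as an illustration rather than the exact steps from back then):

```
# Resize the VM to Basic A3, which supports up to 8 data disks.
az vm resize --resource-group myResourceGroup --name myVM --size Basic_A3

# Create and attach a new empty 1023 GB data disk (repeat for each disk you need).
az vm disk attach --resource-group myResourceGroup --vm-name myVM \
    --name myDataDisk1 --new --size-gb 1023
```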
Thanks,
Aung
I found a solution: attach the 5 TB as file shares using the Azure Files service.
It's possible by creating file shares through the Azure portal, then mounting them under Linux via CIFS (SMB 3.0).
For those who are interested: there is an issue with mounting Azure file shares on CentOS 6.x under Azure. It works only with CentOS 7.x (keep that in mind).
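For reference, a minimal mount sketch (storage account, share name, and key are placeholders; this assumes CentOS 7.x with the cifs-utils package installed):

```
sudo yum install -y cifs-utils
sudo mkdir -p /mnt/myshare

# SMB 3.0 mount of an Azure file share; store the real key somewhere safer than the command line.
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
    -o vers=3.0,username=mystorageaccount,password='<storage-account-key>',dir_mode=0777,file_mode=0777,serverino
```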
You can use Storage Spaces in Azure to increase capacity and performance. The limit is 1 TB per VHD; using Storage Spaces, you can get past this limitation. Keep in mind that there is a limit on the number of disks you can attach to a VM, based on the size you choose.
A sample explanation is at:
https://blogs.msdn.microsoft.com/dfurman/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage/
I'm using Azure Virtual Machines, specifically Linux. I went to add a blank disk ("attach...blank disk" in the portal) and discovered that Azure only allows a maximum size of 1023 GB for disks. The portal won't let you specify a size beyond 1023 GB.
What I'm looking for is a 4 TB filesystem. The disks present themselves as /dev/xd?. I'm wondering if I could take four 1 TB disks and stripe them (RAID 0) in the OS? If they're SAN disks, then I'm not concerned about redundancy, since presumably they're already protected. I admit it sounds kind of hokey.
Is there another option to get bigger disks in Azure?
To be clear, I want persistent storage, not the ephemeral /mnt/storage.
You are correct: you need 4 disks in RAID 0 to get 4 TB. You can follow the guide below; just make sure to adjust the parameters accordingly, because the guide uses only 3 disks (a minimal four-disk sketch follows the link).
Configure Software RAID on Linux
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-configure-raid/
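Here is that four-disk sketch with mdadm (device names and mount point are illustrative; check yours with lsblk, and note the mdadm config path varies by distribution):

```
# Stripe four 1023 GB data disks into one ~4 TB RAID 0 array.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a filesystem and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data

# Persist the array and the mount across reboots (path is /etc/mdadm.conf on RHEL/CentOS).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
echo '/dev/md0 /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```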
Regarding redundancy: no matter which kind of storage replication you configure in Azure, the minimum you get is 3 copies of each disk, so just go for full performance.
Azure Storage Replication
https://azure.microsoft.com/en-us/documentation/articles/storage-redundancy/
For Windows, you may use Storage Spaces:
http://blogs.msdn.com/b/dfurman/archive/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage.aspx
https://technet.microsoft.com/en-us/library/hh831739.aspx
I'm currently looking into a high-availability approach for a file server within Azure, for which I will need to deploy VMs. The data on the file server will be constantly changing. From what I've read so far, I will need at least 2 VMs grouped together into a shared availability set, along with creating a cloud service. Although this addresses the application and server aspect, what about the storage and the data on it?
I understand that I can't attach a single disk to multiple VMs, so I'm a bit lost on how to proceed. Any thoughts or ideas on how to move forward with this?
In short, I have a VM with a data disk directly attached to it, and I'm looking to provide high availability in the event that the VM goes offline, whether through an outage, host patching, hardware maintenance, etc.
Have a look into Azure Blob Storage. Don't worry about disks; just let the Azure fabric handle the data redundancy and scalability for you!
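As a quick illustration, here is a minimal Azure CLI sketch for working with blobs (account, container, and file names are placeholders; authentication, e.g. az login or --account-key, is omitted):

```
# Create a container, then upload and download a blob.
az storage container create --account-name mystorageaccount --name mydata
az storage blob upload --account-name mystorageaccount --container-name mydata \
    --name report.txt --file ./report.txt
az storage blob download --account-name mystorageaccount --container-name mydata \
    --name report.txt --file ./report-copy.txt
```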
Here's an "all you need" introduction to Windows Azure Storage: