Azure VM: Single disk (filesystem) greater than 1023 GB?

I'm using Azure Virtual Machines, specifically Linux. I went to add a blank disk ("attach...blank disk" in the portal) and discovered that Azure only allows a maximum size of 1023GB for disks. The portal won't allow you to specify a size beyond 1023GB.
What I'm looking for is a 4 TB filesystem. The disks present themselves as /dev/sd?. I'm wondering whether I could take four 1 TB disks and stripe them (RAID 0) in the OS. If they're SAN disks, then I'm not concerned about redundancy, since presumably they're already protected. I admit it sounds kind of hokey.
Is there another option to get bigger disks in Azure?
To be clear, I want persistent storage, not the ephemeral /mnt/storage.

You are correct: you need four disks in RAID 0 to get 4 TB. You can follow the guide below; just make sure to adjust the parameters accordingly, because the guide uses only three disks.
Configure Software RAID on Linux
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-configure-raid/
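As a rough illustration of the striping step from that guide, extended to four disks: a minimal sketch assuming the data disks show up as /dev/sdc through /dev/sdf (check lsblk first; the device names and the /data mount point are placeholders):

```bash
# List block devices to confirm which ones are the new empty data disks
lsblk

# Stripe the four data disks into one RAID 0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data

# Persist the array and the mount across reboots
# (the mdadm config path is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
echo '/dev/md0 /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```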
Regarding redundancy: no matter which kind of storage you configure in Azure, each disk is always kept in at least three copies, so just go for full performance.
Azure Storage Replication
https://azure.microsoft.com/en-us/documentation/articles/storage-redundancy/
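If you want to check which replication option a given storage account is using, one way (assuming the current Azure CLI; the resource group and account names are placeholders) is:

```bash
# Prints the replication SKU, e.g. Standard_LRS (3 local copies) or Standard_GRS
az storage account show \
    --resource-group myRG \
    --name mystorageacct \
    --query sku.name \
    --output tsv
```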

For Windows, you can use Storage Spaces:
http://blogs.msdn.com/b/dfurman/archive/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage.aspx
https://technet.microsoft.com/en-us/library/hh831739.aspx

Related

Azure Site Recovery - Disk size

I have a scenario in which I need to fail over an on-premises SQL Server machine that has disks larger than 2 TB. I know ASR does not support that, so I am trying to find out whether I could work around it by striping the disks.
For example, I could stripe the disks on the on-premises machine and then do a failover. However, that would require me to take the machine offline, which I cannot afford to do.
So please let me know of a possible workaround, or of a feature in ASR that I am not aware of.
Thanks in advance
You could use either dynamic disks or Storage Spaces to replicate a volume larger than 1023 GB. The only problem is that a single disk cannot be larger than 1023 GB, and you have to honor the VM's maximum capacity (Azure VMs are limited in the number of data disks you can attach).

Can you move/copy Azure virtual machines to a different instance?

If I set up a server running my application on an Azure instance, for example an A1, can I later change the instance to a D2?
I might want to experiment with a VM at a lower cost, but then move to a higher-performing machine at a later date without having to rebuild everything.
Yes, you can change the size of an Azure VM on demand. Changing the size will trigger a reboot, and if you're using a configuration with a temporary SSD drive, the contents of that SSD will be erased. Other than that, everything is left untouched.
Drew, the Principal PM in this area, has a great blog post about this here.
You can only resize a VM to another offering that does not run on fundamentally different hardware. Since A-Series and D-Series VMs run on similar hardware, you can swap between those two, but you cannot go from A-Series to G-Series. In addition, you need to look at VM availability per region if you want to swap to a size that is only offered in certain areas, as well as whether you are using an ASM (classic) or ARM VM.
If you have an existing VM, you can check what it can swap out with in the new portal under "Size" in the VM Settings.
This will allow you to reboot into a different machine type; however, any temp storage will be erased, as with any VM reboot. You just need to ensure you are storing your persistent data in external storage.
You can learn more about the VM size offerings here.
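For an ARM VM, the same check and the resize itself can also be done from the Azure CLI; a hedged sketch with placeholder names (myRG, myVM, and the target size):

```bash
# Sizes this VM can be resized to on its current hardware cluster
az vm list-vm-resize-options --resource-group myRG --name myVM --output table

# Resize the VM (it reboots, and the temporary SSD drive is wiped)
az vm resize --resource-group myRG --name myVM --size Standard_D2_v2
```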

Azure: How to add >1TB disks to a virtual machine without changing the size of the VM

I see there are some limitations on Azure:
1. On the number of disks that can be attached to a VM;
2. The size of each disk/storage blob is limited to 1 TB.
Is there any hack or workaround to attach larger disks, or several disks, to the same VM without increasing its processing power? My application doesn't need much computing capacity, but it does need plenty of space.
Maybe it's possible by contacting their billing department?
Currently I'm using an A1 Standard VM instance with 2 disks (2 TB in total) already attached to it. The goal is to attach 5 TB of total disk space to the same VM without upgrading to a larger instance size.
You will need to change your VM size to attach more disks. One option is to look at the Basic tier instead of Standard tier A-Series VMs to optimize your cost. Since you do not need a lot of computing power, Basic tier VMs may work fine for you. You will want to look at Basic A3, which allows you to attach a maximum of 8 data disks of 1 TB each. See more information here (https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/)
Thanks,
Aung
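To compare how many data disks each size supports before switching tiers, something like the following Azure CLI query can help (the region and the Basic_A name filter are just examples):

```bash
# maxDataDiskCount shows how many data disks each VM size can attach
az vm list-sizes --location eastus \
    --query "[?contains(name, 'Basic_A')].{Size:name, MaxDataDisks:maxDataDiskCount}" \
    --output table
```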
I found a solution: attach the 5 TB as shares on the Azure File storage service.
This works by creating file shares through the Azure Portal, then mounting each share under Linux via CIFS (SMB 3.0).
For those who are interested, there is an issue with mounting Azure file shares from CentOS 6.x in Azure; it only works with CentOS 7.x (keep that in mind).
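For anyone repeating this, a minimal sketch of the mount step; the storage account name, share name and key below are placeholders:

```bash
# CIFS/SMB client tools (the package is cifs-utils on RHEL/CentOS and Debian/Ubuntu)
sudo yum install -y cifs-utils

sudo mkdir -p /mnt/myshare

# Mount the Azure File share over SMB 3.0
# <account>, <share> and <storage-key> are placeholders for your own values
sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/myshare \
    -o vers=3.0,username=<account>,password=<storage-key>,dir_mode=0777,file_mode=0777,serverino
```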
You can use Storage Spaces in Azure to increase capacity and performance. The limit per VHD is 1 TB; using Storage Spaces you can get past this limitation. Keep in mind that there is a limit on the number of disks you can attach to the VM, based on the size you choose.
A sample explanation is at:
https://blogs.msdn.microsoft.com/dfurman/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage/

How do I mount a large NTFS volume with Azure?

I have a legacy application and third party software that both require NTFS volumes to operate. Changing the software would be a last resort.
The requirement is to have a central storage location for media (videos, images, etc.) that each computer in a domain can access. The size requirement can be as high as 20 terabytes.
My proposed solution is to create a domain and have one of these computers act as a simple file server, with multiple volumes mounted and accessible from the other computers through DFS (Distributed File System). The reason DFS is in the picture is that we are looking to expand the DFS service to provide redundancy.
Is my proposed solution viable? I am willing to accept that I should be evaluating storage/hosting solutions other than Azure that will allow me to meet the requirement.
Your best bet might be Windows Azure Virtual Machines. In this model, an extra-large virtual machine can mount 16 separate 1 TB data drives. You'd have to combine multiple virtual machines to reach your 20 TB requirement.
It sounds like a reasonable solution.
Using Windows Azure Drives will give you NTFS.
Azure Drives are stored as virtual hard disks (VHDs) in Blob Storage. I believe one drive can contain at most 1 TB of data (a Blob Storage limitation), so you will have to mount multiple drives.
This is an interesting article on sharing drives across multiple Role instances via SMB. Admittedly, I have not tried this myself.

90-day IaaS Azure trial offer: how are costs calculated when a VM is stopped?

I have been doing some tests and realized that when I stop a VM, I get a red warning saying that it still generates charges.
But on what basis?
Furthermore, on some VMs I created, the system started misbehaving for no apparent reason and sat at 98% CPU for several hours, with no way to stop it or to connect via RDP. The VM was totally dead, and it was only after several hours that the stop command from the control panel succeeded.
I hope I will not be charged for this? Who gets to decide whether my VM is OK or running wild like a crazy horse?
Moreover, is there any software that would let me transfer my VMs from Azure to my local system and delete them on Azure to stop any charges? For a simple backup, with the possibility to restore/restart them later, or to run them in my own Hyper-V?
Best regards
CS
Even if your VM is stopped, resources are still reserved for it (think of storage space, memory, CPU, ...) and these can't be 'sold' to anyone else. Deleting the VM will free these resources and you'll no longer be charged.
Remember that Virtual Machines are still in preview, meaning things can go bad sometimes. And yes, you'll be charged for this, but during the preview you get a 33% discount (more info here: https://www.windowsazure.com/en-us/pricing/details/).
The persistent disks of your VMs are stored in a storage account as page blobs. Using tools like Azure Storage Explorer, CloudXplorer, CloudBerry, ... you can download these VHD files and simply mount them in Hyper-V (remember that you'll need a license if you want to run the machine on-premises).
Note that if you simply delete the VM, the disks won't be deleted (they will stay in your storage account). In that case you only pay for storage (which is very cheap).
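Besides the GUI tools mentioned above, the VHD page blob can also be pulled down with the Azure CLI; a sketch with placeholder names for the storage account, container and blob:

```bash
# Download a VM's VHD from the storage account's "vhds" container
# (set AZURE_STORAGE_KEY or pass --account-key for authentication)
az storage blob download \
    --account-name mystorageacct \
    --container-name vhds \
    --name myvm-osdisk.vhd \
    --file ./myvm-osdisk.vhd
```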
The price of a VM depends on its size and tier (premium or not).
You also have to pay for storage, but a 120 GB disk is not billed in full; only the space actually used is.
You can use IaaS Management Studio to easily calculate how much your blob disks cost, and to find links to Azure's pricing pages.
