Azure VHD blobs larger than 1 TB

Azure storage supports blobs up to 1 TB in size, which accommodates a VHD with a maximum virtual size of 999 GB.
I want to create a VHD for my database system that uses several blobs to accommodate databases larger than 1 TB. Is this possible? And if so, how can I configure a single VHD to span multiple blobs?

Win2012? From the Azure portal, create and attach N disks (1 TB each?) to your virtual machine. Then, from Windows Server 2012, create a Storage Pool and create a virtual disk on it. The resulting size is the sum of the disk sizes if you choose the "Simple" layout (less if you choose Mirror or Parity). Details: http://blogs.technet.com/b/yungchou/archive/2012/08/31/windows-server-2012-storage-virtualization-explained.aspx
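A minimal sketch of that flow in PowerShell, run inside the Windows Server 2012 VM once the data disks are attached; the pool and virtual disk names here are placeholders:

# Gather the attached data disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool and a Simple (striped) virtual disk using all available capacity
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -UseMaximumSize

# Initialize, partition, and format the new virtual disk
Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false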

From this blog post: http://blogs.msdn.com/b/windowsazure/archive/2013/06/04/the-top-10-things-to-know-when-running-sql-server-workloads-on-windows-azure-virtual-machines.aspx
A data disk can be up to 1 TB in size, and you can have a maximum of 16 drives on an A4 or larger VM. If your database is larger than 1 TB, you can use SQL Server file groups to spread your database across multiple data disks. Alternatively, you can combine multiple data disks into a single large volume using Storage Spaces in Windows Server 2012. Storage Spaces are better than legacy OS striping technologies because they work well with the append-only nature of Windows Azure Storage.
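As a hedged illustration of the file-group approach, assuming the SqlServer PowerShell module is installed and two data disks are mounted as F: and G: (the database and file names are placeholders):

# Spread a new file group across two data disks so IO is distributed
$query = @"
ALTER DATABASE MyBigDb ADD FILEGROUP BigData;
ALTER DATABASE MyBigDb ADD FILE
    (NAME = BigData1, FILENAME = 'F:\Data\BigData1.ndf', SIZE = 500GB),
    (NAME = BigData2, FILENAME = 'G:\Data\BigData2.ndf', SIZE = 500GB)
TO FILEGROUP BigData;
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $query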

Azure Batch DataDisk vs Mounted Virtual File System

In Azure Batch, when creating a pool in the portal you can create a DataDisk and set its size in GB, as well as choose between Standard LRS and Premium LRS.
When using PowerShell and/or the .NET libraries you can also set up a MountConfiguration to a FileShare (as well as Blobs, etc.).
I'm confused as to what the difference is between the two. Specifically between a DataDisk and a Mounted FileShare.
For my scenario I want to use the lowest powered Linux VM possible but need at least 500GB of storage isolated to each node (no need for sharing across nodes).
I added a DataDisk to my pool since it seemed simpler than mounting a FileShare but my nodes do not have access to the additional file storage. Are there additional configurations that need to be made to the job or task? Does it need to be mounted to a drive letter like a FileShare does?
If I add a 500 GB DataDisk to my pool, is that shared across all the nodes that are running, or does each new node get its own 500 GB partition?
There does not seem to be much documentation on DataDisks for Azure Batch. In fact, searching for the term within the Batch documentation returns 0 results!
• When you add a data disk of a particular size to a Batch pool, it is attached to every node, existing or newly created, in that pool. For example, if you add a 500 GB data disk to a pool and create 4 nodes in it, each of the 4 nodes gets its own 500 GB data disk. If the nodes are Linux VMs, each node's disk is attached raw and you need to initialize it from within the VM. To partition and mount the disks, please follow this documentation:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal#connect-to-the-linux-vm-to-mount-the-new-disk
Following that documentation, you will be able to mount the data disks individually on each node from within the VM.
• When you add a data disk to a VM, it is not visible until you initialize or format it from within the VM; thus you will need to log in to every node and partition or initialize the disk before it can be seen and used.
Data disks are dedicated storage attached to a single system/VM and are not shared with other resources unless that is explicitly enabled, whereas file shares are network-mounted storage volumes available over the network to all provisioned resources/VMs/systems. Like data disks, file shares have a fixed size, but that space is shared among the consuming resources unless a quota is allocated to each resource accessing the share.
The same applies to nodes in an Azure Batch pool.
For reference:
https://learn.microsoft.com/en-us/azure/batch/virtual-file-mount?tabs=linux
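For completeness, a rough sketch of creating a pool with a per-node data disk from PowerShell using the Az.Batch module; the PSDataDisk constructor arguments (LUN, then size in GB) are an assumption to verify against the module documentation, and the account, pool, and image names are placeholders:

$ctx = Get-AzBatchAccountKey -AccountName "mybatchaccount"
$imageRef = New-Object Microsoft.Azure.Commands.Batch.Models.PSImageReference -ArgumentList @("UbuntuServer", "Canonical", "18.04-LTS")
$config = New-Object Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration -ArgumentList @($imageRef, "batch.node.ubuntu 18.04")
# One 500 GB data disk per node, attached at LUN 0 (assumed constructor order)
$dataDisk = New-Object Microsoft.Azure.Commands.Batch.Models.PSDataDisk -ArgumentList @(0, 500)
$config.DataDisks = [System.Collections.Generic.List[Microsoft.Azure.Commands.Batch.Models.PSDataDisk]]@($dataDisk)
New-AzBatchPool -Id "mypool" -VirtualMachineSize "Standard_A2_v2" -VirtualMachineConfiguration $config -TargetDedicatedComputeNodes 4 -BatchContext $ctx

Each node still has to initialize and mount its own copy of the disk, per the documentation linked above.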

Resizing Disk on Azure Virtual Machine with Storage Pool

I have a virtual machine (classic) on Azure that uses 4 data disks of 50 GB each. These disks are grouped together in a storage pool.
Is it possible to increase the size of the disks (to 100 GB, for example) even though they are in a storage pool?
I have already successfully grown a single disk (with PowerShell), but never one that is part of a storage pool.
I want to be sure that there is no danger to the data currently on the disks.
Thanks for your help.
As far as I know, it is not possible to expand a storage pool by growing an existing physical disk.
As a workaround, you can resize your Azure VM to a larger size (which allows more data disks), then extend the pool by adding more physical disks.
You can use the Add-PhysicalDisk PowerShell cmdlet to add a physical disk to the pool:
# Select the newly attached disk by its friendly name
$toadd = Get-PhysicalDisk -FriendlyName "Msft Virtual Disk"
# Add it to the existing pool
Add-PhysicalDisk -StoragePoolFriendlyName poolname -PhysicalDisks $toadd
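After the new physical disk is in the pool, the virtual disk and the volume on it still have to be grown; a sketch of that follow-up, assuming a virtual disk named "vdisk" whose volume is mounted at E: (both placeholders):

# Grow the virtual disk into the capacity just added to the pool
Resize-VirtualDisk -FriendlyName "vdisk" -Size 200GB

# Then grow the partition to the maximum supported size
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max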

Azure blob premium storage vs standard storage

I want to use premium storage for better performance.
I am using it for blobs and I need the fastest blob access for reading.
I read and write the blobs only internally, within the data center.
I created a premium storage account and compared it against standard storage by reading a 10 MB blob 100 times at different locations using a seek method (reading 50 KB each time).
I read it using a VM running Windows Server 2012.
The results are the same: around 200 ms.
Do I need to do something else, like attach the storage? If so, how do I attach the storage?
Both the VM and the storage are in the same region.
You can use Premium Storage blobs directly via the REST API. Performance will be better than with Standard Storage blobs. The perf difference may not be obvious in some cases, for example if there is local caching in the application or when the blob is too small; here the 10 MB blob size is tiny compared to the performance limits. Can you retry with a larger blob, like 10 GB? Also note that the Premium Storage model is not optimized for tiny blobs.
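To make the latency comparison more explicit, you could time the range reads yourself; a rough sketch with the Az.Storage module, assuming a version that still exposes the classic ICloudBlob object (the account, container, and blob names are placeholders):

$ctx  = New-AzStorageContext -StorageAccountName "mypremiumacct" -StorageAccountKey $key
$blob = Get-AzStorageBlob -Container "data" -Blob "big.bin" -Context $ctx
$buf  = New-Object byte[] (50KB)
# Time 100 random-offset 50 KB reads against the blob
$elapsed = Measure-Command {
    for ($i = 0; $i -lt 100; $i++) {
        $offset = Get-Random -Maximum ($blob.Length - 50KB)
        $null = $blob.ICloudBlob.DownloadRangeToByteArray($buf, 0, $offset, 50KB)
    }
}
"Average read: $($elapsed.TotalMilliseconds / 100) ms"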
Well, in the virtual machine case performance ultimately depends on the underlying physical disk; using premium storage is a plus, but I think the network connection matters as well.
By default, there is temporary storage (SSD) provided with each VM. This temporary storage drive is present on the physical machine that is hosting your VM and hence can have higher IOPS and lower latency compared to persistent storage such as data disks.
As a test, you can create a VM with an HDD disk and attach an SSD to it. Once that completes, you can install a tool to measure disk performance; that way you can see the difference between HDD and SSD.
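For example, with Microsoft's DiskSpd tool (downloaded separately), run from PowerShell once against the HDD-backed drive and once against the SSD-backed drive; the drive letters and sizes are placeholders:

# 30-second random 8 KB read test: 4 threads, 32 outstanding IOs, caching disabled
.\diskspd.exe -c1G -b8K -d30 -t4 -o32 -r -w0 -Sh E:\testfile.dat
.\diskspd.exe -c1G -b8K -d30 -t4 -o32 -r -w0 -Sh F:\testfile.dat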
"like attach the storage? if so how do i attach the storage."
You can attach an SSD to the VM via the new Azure portal.
For more information about attaching a disk to a VM, please refer to this link.

When creating a virtual machine in Azure with the template SQL Server 2014 SP1 Web on Windows Server 2012 R2, a 1 TB premium disk is always attached

When creating the VM I'm asked about storage configuration. When I select IOPS=0 (the minimum is otherwise 5000), Throughput=0, and Storage size=0, the info text is:
0 data disks will be added to the virtual machine. This value was computed based on the value of IOPS, throughput, and storage size.
When the VM is created and I go to the storage account and select Blobs and the container named vhds, I see two disks: one 127 GB disk and one 1 TB disk.
Since the 1 TB premium disk costs >100 €/month, I don't want that.
I tried removing the disk from a created machine, but when I tried to add a new one I got the error "LUN :0 is already in use".
Preferably I would like to create the machine correctly from the start. How can I do that?
This is correct. The current SQL Server IaaS experience in the Azure portal creates one 1 TB disk even if you specify 0 IOPS. We will add a fix to ensure the user cannot specify an IOPS value below that of a single 1 TB disk. If you need a SQL VM without data disks, or any other configuration, you can use Azure PowerShell to create the VM.
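A compressed sketch of that PowerShell route with the Az module, creating the VM from the SQL Server 2014 SP1 Web image with no data disks; the resource names are placeholders, $nic is assumed to be a pre-created network interface, and the exact image offer/SKU strings should be verified with Get-AzVMImageSku:

$cred = Get-Credential
# $nic: an existing network interface created beforehand (assumed)
$vm = New-AzVMConfig -VMName "sqlvm" -VMSize "Standard_DS2_v2" |
    Set-AzVMOperatingSystem -Windows -ComputerName "sqlvm" -Credential $cred |
    Set-AzVMSourceImage -PublisherName "MicrosoftSQLServer" -Offer "SQL2014SP1-WS2012R2" -Skus "Web" -Version "latest" |
    Add-AzVMNetworkInterface -Id $nic.Id
# No Add-AzVMDataDisk call, so no 1 TB premium disk is attached
New-AzVM -ResourceGroupName "myrg" -Location "westeurope" -VM $vm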

10 Terabytes of storage - Microsoft Azure Linux VM

I want to create a Linux VM that can accommodate up to 10 terabytes of data. I am not sure how to accomplish that on Microsoft Azure, or if it is even possible. Any insight would be appreciated.
You create the storage separately from the VM; Azure has a data service known as the Blob service.
Here is a link: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/
Supporting many terabytes is not a problem.
You create one Linux VM from an image and then attach (via the Azure portal, as described here) 10 disks of 1 TB each. Today 1 TB is the maximum size of a disk in Azure. As for the VM, you will need an Extra Large VM in order to accept this number of disks (up to 16 disks for an XL).
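A sketch of that attach loop with the classic (ASM) Azure PowerShell module; the service and VM names are placeholders:

# Attach ten new, empty 1023 GB (the 1 TB maximum) data disks, one per LUN
$vm = Get-AzureVM -ServiceName "mysvc" -Name "mylinuxvm"
for ($lun = 0; $lun -lt 10; $lun++) {
    $vm = $vm | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "data$lun" -LUN $lun
}
$vm | Update-AzureVM

Inside the Linux VM you would then combine the disks into one volume, for example with mdadm or LVM.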
