I have a legacy application and third-party software that both require NTFS volumes to operate. Changing the software would be a last resort.
The requirement is to have a central storage location for media (videos, images, etc.) that each computer in a domain can access. The size requirement can be as high as 20 terabytes.
My proposed solution is to create a domain and designate one of these computers to act as a simple file server, with multiple volumes mounted and accessible from the other computers through DFS (Distributed File System). DFS is in the picture because we are looking to expand the DFS service later to provide redundancy; a namespace sketch follows below.
Is my proposed solution viable? I am willing to accept that I should be evaluating storage/hosting solutions other than Azure that will allow me to meet the requirement.
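For concreteness, here is a rough PowerShell sketch of the namespace setup I have in mind; the domain, server, and share names are placeholders:

```powershell
# On the designated file server: install the DFS Namespace role.
Install-WindowsFeature FS-DFS-Namespace -IncludeManagementTools

# Create a domain-based namespace rooted on an existing share.
New-DfsnRoot -Path "\\contoso.local\Media" -TargetPath "\\FILESERV01\Media" -Type DomainV2

# Publish each mounted NTFS volume (shared as Media1, Media2, ...) under the namespace.
New-DfsnFolder -Path "\\contoso.local\Media\Videos" -TargetPath "\\FILESERV01\Media1"
New-DfsnFolder -Path "\\contoso.local\Media\Images" -TargetPath "\\FILESERV01\Media2"
```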
Your best bet might be Windows Azure Virtual Machines. In this model, an extra large virtual machine can mount 16 separate 1 TB data drives. You'd have to combine multiple virtual machines to reach your 20 TB requirement.
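A rough sketch of the attach step using the current Az PowerShell module (the tooling has changed since this was written; the resource group, VM, and disk names here are hypothetical):

```powershell
# Attach sixteen empty 1 TB data disks to an existing VM.
$vm = Get-AzVM -ResourceGroupName "media-rg" -Name "fileserver01"
foreach ($i in 1..16) {
    $vm = Add-AzVMDataDisk -VM $vm -Name "data$i" -DiskSizeInGB 1024 `
        -Lun $i -CreateOption Empty
}
Update-AzVM -ResourceGroupName "media-rg" -VM $vm
```

Inside the guest you would then combine the disks into one volume (Storage Spaces on Windows).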
It sounds like a reasonable solution.
Using Windows Azure Drives will give you NTFS.
Azure Drives are stored as virtual hard disks (VHDs) in Blob Storage. I believe one drive can contain at most 1 TB of data (a Blob Storage limitation), so you will have to mount multiple drives.
This is an interesting article on sharing drives across multiple Role instances via SMB. Admittedly, I have not tried this myself.
We have 2 Ubuntu VMs inside a Virtual Machine Scale Set with Flexible Orchestration that are behind an Application Gateway and run Apache Tomcat web servers. When a client connects to one of the VMs and uploads files, those files also need to exist on the other virtual machine.
I found only two options to do that:
Azure File Share: $80/month for 1 TB of the Hot SKU, but the speed is only 1 MB/s when mounted as an SMB share on Ubuntu.
Azure NetApp Files: $600/month for the 4 TB minimum.
Neither option is good: the first is too slow and the second is too expensive. What can we use in the development and production environments to achieve file sharing between highly available VMs?
1 MB/s is awfully low; I am not sure where this is coming from. I am fairly sure I get about 30 MB/s for Standard SSD/HDD deployments when mounting them into Linux Docker containers, which should not perform worse.
An alternative to the mounted file shares would be to use shared disks. You basically attach one disk to multiple VMs at the same time.
There are some limitations; in your case mainly:
Shared disks can be attached to individual VMSS instances but can't be defined in the VMSS models or automatically deployed.
You can still expect to pay $50 to $200 per month for the disk, but you should be able to get much better speeds than what you are currently getting.
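A sketch of creating and attaching such a shared disk with the Azure CLI (the resource group, disk, and VM names are placeholders):

```bash
# Create a premium disk that up to two VMs may attach simultaneously.
az disk create \
    --resource-group my-rg \
    --name shared-data \
    --size-gb 1024 \
    --sku Premium_LRS \
    --max-shares 2

# Attach the same disk to both VMs.
az vm disk attach --resource-group my-rg --vm-name vm1 --name shared-data
az vm disk attach --resource-group my-rg --vm-name vm2 --name shared-data
```

Keep in mind that the shared disk itself does not coordinate concurrent writes; you still need something cluster-aware on top of it.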
Use a Blob and grant access via Managed Identity to your Virtual Machines:
https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage
Blob Pricing and IOPS:
https://azure.microsoft.com/es-es/pricing/details/storage/page-blobs/
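A minimal sketch of that flow from inside one of the VMs using the Azure CLI, assuming the VMs' managed identities have been granted the Storage Blob Data Contributor role on the account (the account and container names are placeholders):

```bash
# Authenticate as the VM's managed identity; no credentials are stored on the VM.
az login --identity

# Upload and download files as blobs using that identity.
az storage blob upload \
    --account-name mystorageacct --container-name uploads \
    --name photo.jpg --file ./photo.jpg --auth-mode login

az storage blob download \
    --account-name mystorageacct --container-name uploads \
    --name photo.jpg --file ./photo-copy.jpg --auth-mode login
```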
We've got two Windows Server 2019 virtual machines within the same Azure subscription and subnet. Recently, we created a Premium SSD Azure data disk with 'sharing' enabled and mounted it on those two VMs without any problems. It's perfectly fine to use the disk from both machines, but unfortunately files/folders added from one of them are not visible on the other.
Is it possible to somehow truly share the data between the machines using such an Azure disk attached to both of them? Maybe some super secret PowerShell option/flag when mounting the drives?
The machines are within the same domain, so obviously we can simply share a folder (which is what we do right now), but the problem there is that whenever our application writes something to that share, it takes ages due to latency/long upload times (effectively, it freezes the application for a couple of minutes). Yes, they are in the same region (machines and disk). There's this Proximity Placement Groups feature available, but it does not seem to be applicable to disks, unfortunately.
We've also tried Azure Files, but we got exactly the same problem as with the shared folder within the domain (long upload times whenever our application writes something to persistent storage).
I've gone through "Shared drive between Azure Virtual Machines" but there's nothing there about seeing the same contents from all machines which have the disk attached and mounted.
Thank you! Would appreciate any ideas.
Right, so eventually I found the answer: basically, the machines have to be joined into a failover cluster. Assuming the shared SSD:
is formatted as NTFS on both machines
has the same drive letter on both machines (in my case F:)
If those conditions are met, that particular disk can be added to Cluster Shared Volumes, and our program has no problem writing data smoothly from both machines.
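A condensed PowerShell sketch of that setup, run from one of the domain-joined VMs (the cluster, node, and cluster-disk names are placeholders):

```powershell
# On both nodes: install the failover clustering feature.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Form the cluster; a distributed network name avoids needing an extra IP in Azure.
New-Cluster -Name "fileclu" -Node "vm1","vm2" -ManagementPointNetworkType Distributed

# Bring the shared Azure disk into the cluster and promote it to a Cluster Shared Volume.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
# The volume then surfaces on both nodes under C:\ClusterStorage\.
```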
I see there are some limitations in Azure:
1. On the number of disks that can be attached to a VM;
2. The size of each disk/storage blob is limited to 1 TB.
Is there any hack or workaround to attach larger disks, or several disks, to the same VM without increasing the processing power of the VM? My application doesn't need high computing capacity, but it needs plenty of space.
Maybe it's possible by contacting their billing department?
Currently I'm using an A1 Standard VM instance with 2 disks (2 TB in total) already attached to it. The goal is to attach 5 TB of total disk space to the same VM without upgrading to a larger VM size.
You will need to change your VM size to attach more disks. One option is to look at the Basic tier instead of the Standard tier A-series VMs to optimize your cost. Since you do not need a lot of computing power, Basic tier VMs may work fine for you. You will want to look at Basic A3, which will allow you to attach a maximum of 8 data disks of 1 TB each. See more information here: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/
Thanks,
Aung
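For what it's worth, with today's Azure CLI (which did not exist when this was written) the resize itself is a one-liner; the resource group and VM names are placeholders, and the VM reboots during the operation:

```bash
az vm resize --resource-group my-rg --name my-vm --size Basic_A3
```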
I found a solution: attaching 5 TB of storage via the Azure File Share service.
It's possible by creating file shares through the Azure Portal, then mounting them under Linux via CIFS (SMB 3.0).
For those who are interested: there is an issue with mounting these file shares from CentOS 6.x under Azure; it works only with CentOS 7.x, since the older CentOS 6 kernel lacks SMB 3.0 support in its CIFS client (keep that in mind).
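For reference, the mount step might look roughly like this on CentOS 7.x (the storage account name, share name, and key are placeholders):

```bash
# Install the CIFS client and mount the Azure file share over SMB 3.0.
sudo yum install -y cifs-utils        # apt-get install cifs-utils on Ubuntu
sudo mkdir -p /mnt/media
sudo mount -t cifs //mystorageacct.file.core.windows.net/media /mnt/media \
    -o vers=3.0,username=mystorageacct,password='<storage-account-key>',serverino,dir_mode=0775,file_mode=0775
```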
You can use Storage Spaces in Azure to increase capacity and performance. The limit is 1 TB per VHD; using Storage Spaces you can get past this limitation. Keep in mind that there is a limit on the number of disks you can attach to the VM, based on the VM size you choose.
A sample explanation is at:
https://blogs.msdn.microsoft.com/dfurman/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage/
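A minimal PowerShell sketch of pooling the attached data disks into one large striped NTFS volume (the pool and volume names are placeholders):

```powershell
# Gather every attached data disk that is eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool and a striped (Simple, i.e. no resiliency) virtual disk across it.
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "MediaDisk" `
    -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count

# Initialize, partition, and format the resulting disk as one NTFS volume.
Get-VirtualDisk -FriendlyName "MediaDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Media"
```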
I'm using Azure Virtual Machines, specifically Linux. I went to add a blank disk ("attach...blank disk" in the portal) and discovered that Azure only allows a maximum size of 1023 GB for disks; the portal won't allow you to specify a size beyond that.
What I'm looking for is a 4 TB filesystem. The disks present themselves as /dev/xd?. I'm wondering if I could take four 1 TB disks and stripe them (RAID 0) in the OS. If they're SAN disks, then I'm not concerned about redundancy, since presumably they're already protected. I admit it sounds kind of hokey.
Is there another option to get bigger disks in Azure?
To be clear, I want persistent storage, not the ephemeral /mnt/storage.
You are correct: you need 4 disks in RAID 0 to get 4 TB of space. You can follow the guide below; just make sure to change the parameters accordingly, because the guide uses only 3 disks.
Configure Software RAID on Linux
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-configure-raid/
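A condensed version of what that guide does, adapted to four disks; it assumes the blank data disks appear as /dev/sdc through /dev/sdf, so verify the device names with lsblk first:

```bash
# Stripe four 1 TB data disks into one ~4 TB RAID 0 array.
sudo apt-get install -y mdadm                 # yum install mdadm on RHEL/CentOS
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a filesystem on the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data

# Persist the array and the mount across reboots.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0 /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```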
Regarding redundancy: no matter what kind of storage you configure in Azure, the worst you can get is 3 copies of each disk kept by the platform, so just go for full performance.
Azure Storage Replication
https://azure.microsoft.com/en-us/documentation/articles/storage-redundancy/
For Windows you may use Storage Spaces
http://blogs.msdn.com/b/dfurman/archive/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage.aspx
https://technet.microsoft.com/en-us/library/hh831739.aspx
This is more of a scenario than a specific technical question.
I have two Azure VMs that run a web application in load-balanced mode,
as per this article: http://asheej.blogspot.in/2014/03/load-balancing-using-windows-azure.html
Both virtual machines have an additional disk attached which stores images that are referenced by the web application hosted in each VM's IIS.
Now, what would be the best way to keep the contents of the two VMs' hard drives in sync?
For example, if I delete or add data on the first VM's VHD, that change should also be reflected on the second VM.
Is anything like that possible, perhaps using a common VHD for both machines, which would take syncing out of the question?
Before going into the solution, let me briefly touch on the VM and disk relationship.
Typically a VM has three kinds of disks attached: 1. the OS disk, 2. a temporary disk, and 3. data disks. The VM holds a lease on all of these disks; the only way to write to the data disks is via the VM.
The C: disk is persistent, meaning that when the VM gets rebooted the data on the disk is retained. But D:\ is non-persistent: when you reboot, the disk will be wiped clean. So at no point in time should D:\ be used to store any user data.
So writing a process to sync between two VMs just to keep pictures in sync is not ideal. You might know this already, but I wanted to set the context for the options provided below.
Your potential options are as follows:
1. Set up a file share using the new Azure File service (in preview): http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx. This will be the single source for all your images, and you won't need to worry about syncing files (a mount sketch follows after this list).
2. Store the images in Azure Blob Storage and access them from the application running in the VMs: http://blogs.msdn.com/b/yaohuang1/archive/2012/07/02/asp-net-web-api-and-azure-blob-storage.aspx and http://www.nickharris.net/2012/11/how-to-upload-an-image-to-windows-azure-storage-using-mobile-services/
3. Host another VM as a web server and serve your images from there; the two VMs can then reference the images. The cost here is hosting the extra VM.
The key point: with all three options there is no need to sync files in two different places; everything lives in a single place.
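For option 1, mounting the share on each VM could look roughly like this in PowerShell (the storage account and share names are hypothetical):

```powershell
# Persist the storage account credentials, then map the share as a drive.
cmdkey /add:mystorageacct.file.core.windows.net /user:AZURE\mystorageacct /pass:"<storage-account-key>"
New-PSDrive -Name Z -PSProvider FileSystem -Persist `
    -Root "\\mystorageacct.file.core.windows.net\images"
```

Both VMs map the same share, so an image uploaded from one is immediately visible to the other.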
Edited based on new information:
In your scenario, hosting your files on the VMs is not the right approach. You should take the following into consideration, even for a short-term solution, if you are using the Azure Load Balancer.
The Azure Load Balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate the hash that maps traffic to the available servers, and the distribution is fairly random. So if you load-balance the VMs, you cannot control which VM the images are accessed from.
Manual updates are therefore not feasible in this scenario.
You either need to set up a virtual network that allows you to create and share a Windows file share, OR you should investigate the Azure File service for creating a share that both VMs connect to (see: http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx).