Azure: How to Share Files Between Highly Available Web Servers

We have 2 Ubuntu VMs inside a Virtual Machine Scale Set with Flexible Orchestration that sit behind an Application Gateway and run Apache Tomcat web servers. When a client connects to one of the VMs and uploads files, those files also need to exist on the other virtual machine.
I have only found 2 options to do that:
Azure Files - about $80/month for 1 TB on the Hot tier, but throughput is only about 1 MB/s when mounted as an SMB share on Ubuntu.
Azure NetApp Files - about $600/month for the 4 TB minimum.
Neither option is a good fit: the first is too slow and the second is too expensive. What can we use in our development and production environments to share files between highly available VMs?

1 MB/s is awfully low; I am not sure where that is coming from. I fairly consistently get about 30 MB/s from Standard SSD/HDD file share deployments when mounting them into Linux Docker containers, and a mount directly on the VM should not perform worse than that.
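Before switching services, it may be worth double-checking the mount itself. A minimal sketch, assuming a hypothetical storage account, share, key, and mount point (the SMB protocol version and caching options can make a large difference for small-file throughput):

```python
import subprocess

# Hypothetical placeholders -- substitute your storage account, share, and key.
ACCOUNT = "mystorageacct"
SHARE = "webuploads"
KEY = "<storage-account-key>"

# Mount the Azure file share over SMB 3.1.1 with explicit caching options;
# cache=strict and a higher actimeo often help small-file workloads.
# Must run as root (or via sudo).
subprocess.run(
    [
        "mount", "-t", "cifs",
        f"//{ACCOUNT}.file.core.windows.net/{SHARE}",
        "/mnt/webuploads",
        "-o",
        f"vers=3.1.1,username={ACCOUNT},password={KEY},"
        "serverino,cache=strict,actimeo=30",
    ],
    check=True,
)
```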
An alternative to mounted file shares would be shared disks: you can attach a single managed disk to multiple VMs at the same time.
There are some limitations; the main one for your case:
Shared disks can be attached to individual VMSS instances but can't be defined in the VMSS models or automatically deployed.
You can still expect to pay $50-200 for the disk, but you should get much better speeds than you are currently seeing.
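As a sketch of what creating such a disk looks like with the Python management SDK (the subscription ID, resource group, region, and disk name are placeholders; `max_shares` is the property that enables disk sharing):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import CreationData, Disk, DiskSku

# Hypothetical placeholders -- substitute your own subscription and names.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

disk = Disk(
    location="westeurope",
    sku=DiskSku(name="Premium_LRS"),
    disk_size_gb=1024,
    max_shares=2,  # lets two VMs attach the disk at the same time
    creation_data=CreationData(create_option="Empty"),
)

poller = client.disks.begin_create_or_update("my-rg", "shared-data-disk", disk)
print(poller.result().id)
```

Note that a shared disk only gives both VMs block-level access; you still need a cluster-aware filesystem or application on top of it, as the shared-premium-disk question further down illustrates.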

Use Blob Storage and grant your Virtual Machines access via a Managed Identity:
https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage
Blob Pricing and IOPS:
https://azure.microsoft.com/es-es/pricing/details/storage/page-blobs/
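A minimal sketch of the blob approach from a VM, assuming the VM's managed identity has been granted a data role such as Storage Blob Data Contributor on the account (the account URL, container, and file names are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical account and container names -- substitute your own.
service = BlobServiceClient(
    account_url="https://mystorageacct.blob.core.windows.net",
    credential=DefaultAzureCredential(),  # picks up the VM's managed identity
)
container = service.get_container_client("uploads")

# Upload from the VM that received the file...
with open("/var/uploads/report.pdf", "rb") as f:
    container.upload_blob("report.pdf", f, overwrite=True)

# ...and read it back from any other VM with the same access.
data = container.download_blob("report.pdf").readall()
```

Since no keys are stored on the machines and both VMs see the same container, nothing needs to be synced between them.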

Related

Storage to cloud

We have a working NetApp plus ESXi (VMware 5.5) setup, with multiple VMs running on 3 ESXi systems but residing entirely on NetApp storage.
We are thinking of moving this entire setup to a private cloud built on HPE Nimble storage. This cloud is currently owned by one of our departments, which is ready to rent us storage space and ESXi hosts (a VM cluster) to run our VMs on. The immediate advantages for us would be more space, more network speed, a DR setup, and no longer having to worry about the hardware.
Of course this is still in the discussion phase, but I would like to ask you experts the following questions.
NetApp storage is about both the data and its configuration (snapshots, user quota policies, export rules, etc.). When we talk about storage space in the cloud, how will we control and administer those configuration aspects? Or will that no longer be possible, with the cloud administrators taking that control into their hands and us depending on them for every configuration change? This is a very important factor.
Can the VMs running on NetApp storage be migrated without much effort? Is there a documented method for this?
Your view on this will be really helpful.
Thanks in advance.
Regards,
Admin
On point #1, a common way to provide multi-tenant administrator access on NetApp is to create a separate SVM [1] (Storage Virtual Machine) that a tenant administrator can use to manage volume capacity, snapshots, quotas, etc.
For #2, a common migration path for moving VMware VMs is to use Storage vMotion [2]. The private cloud provider can remap the ESXi hosts in your environment to be managed under their vCenter Server first. Then from there, they will have the ability to (non-disruptively, in most cases) move the VMs from your old NetApp datastores to new datastores on their array. They can do the same for vMotioning these VMs over to their managed ESXi hosts.
[1] https://docs.netapp.com/us-en/ontap/concepts/storage-virtualization-concept.html
[2] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vcenterhost.doc/GUID-AB266895-BAA4-4BF3-894E-47F99DC7B77F.html
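For illustration only, a hedged pyVmomi sketch of the Storage vMotion step described in [2]; the vCenter host, credentials, VM name, and target datastore are hypothetical placeholders:

```python
import ssl

from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter and object names -- substitute your environment's.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object of the given type by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "app-vm-01")
target = find(vim.Datastore, "nimble-datastore-01")

# A RelocateSpec with only a datastore set moves storage but not the host,
# i.e. a Storage vMotion; the VM keeps running during the move.
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target))
```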

Azure virtual machines with shared premium disk - cannot see data

We've got two Windows Server 2019 virtual machines within the same Azure subscription and subnet. Recently we created a Premium SSD Azure data disk with 'sharing' enabled and mounted it on both VMs without any problems. The disk is perfectly usable from both machines, but unfortunately files/folders added on one of them are not visible on the other.
Is it possible to somehow truly share the data between the machines using such an Azure disk attached to both of them? Maybe some super secret PowerShell option/flag when mounting the drives?
The machines are in the same domain, so obviously we can simply share a folder (which is what we do right now), but the problem is that whenever our application writes something to that share, it takes ages due to latency/long upload times (effectively, it freezes the application for a couple of minutes). Yes, the machines and the disk are in the same region. There is the Proximity Placement Groups feature, but it does not seem to be applicable to disks, unfortunately.
We've also tried Azure Files, but we got exactly the same problem as with the shared folder within the domain (long upload times whenever our application writes to persistent storage).
I've gone through Shared drive between Azure Virtual Machines, but there's nothing there about seeing the same contents from all machines that have the disk attached and mounted.
Thank you! Would appreciate any ideas.
Right, so eventually I found the answer. Basically, the machines have to be joined into a failover cluster. Assuming the shared SSD:
is formatted as NTFS on both machines
has the same drive letter on both machines (in my case F:)
If those conditions are met, the disk can be added to Cluster Shared Volumes, and the program then has no problem writing data smoothly from both machines.
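A hedged sketch of automating those steps by driving the Failover Clustering cmdlets from Python; the cluster name, node names, IP address, and cluster disk name are hypothetical, and the Failover Clustering feature must already be installed on both VMs:

```python
import subprocess

def run_ps(command: str) -> None:
    """Run a PowerShell command, raising if it fails."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command],
                   check=True)

# Form the cluster from both VMs (names and address are placeholders).
run_ps("New-Cluster -Name 'app-cluster' -Node 'vm1','vm2' "
       "-StaticAddress 10.0.0.10")

# Offer the shared Azure disk to the cluster, then promote it to a
# Cluster Shared Volume so both nodes see the same files.
run_ps("Get-ClusterAvailableDisk | Add-ClusterDisk")
run_ps("Add-ClusterSharedVolume -Name 'Cluster Disk 1'")
```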

How to make a backup virtual machine

Can we use a virtual machine (machine A) to take a backup of another virtual machine's snapshot (machine B)? If so, what setup do we need on machine A? Can you give a working example with some real virtualization techniques? The assumption is that both virtual machines are running on some cloud virtual machine management service, for example oVirt.
Although it is a general question, I think the feature you are really looking for is snapshots.
I use a lot of cloud-based VMs, and most cloud providers let you snapshot your volumes. This is the preferred way to do backups in the cloud, as it doesn't require you to stop or slow down your VM: the backup is done at the disk level.
Later on you can restore your backups by creating an image out of your disk snapshots and spinning up a new VM with that image.
On the other hand, if you really need to back up a running machine at the filesystem level, you can have a look at rsync on Linux/Unix hosts. For Windows, sorry, I don't have a clue...
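For the filesystem-level route, a minimal sketch of machine A pulling a backup from machine B with rsync over SSH (host names and paths are hypothetical placeholders):

```python
import subprocess

# Hypothetical placeholders -- the source's trailing slash copies contents.
SOURCE = "backup@machine-b:/var/www/data/"
DEST = "/backups/machine-b/"

subprocess.run(
    [
        "rsync",
        "-az",       # archive mode (permissions, timestamps) plus compression
        "--delete",  # mirror deletions so the copy matches the source
        SOURCE,
        DEST,
    ],
    check=True,
)
```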

Possible to keep two VHDs in sync on Azure VMs

This is more of a scenario than a specific technical question.
I have two Azure VMs that run a web application in load-balanced mode,
as per this article: http://asheej.blogspot.in/2014/03/load-balancing-using-windows-azure.html
Both virtual machines have an additional disk attached which stores images that are referenced by the web application hosted in each VM's IIS.
Now, what would be the best way to keep the contents of the two VMs' hard drives in sync?
For example, if I delete or add data on the first VM's VHD, that change should also be reflected on the second VM.
Is there any way to do this, perhaps using a common VHD for both machines, which would take syncing out of the question?
Before going into the solution, let me briefly touch on the relationship between VMs and disks.
Typically a VM has three kinds of disks attached: 1. the OS disk, 2. a temporary disk, and 3. data disks. The VM holds a lease on all of these disks; the only way to write to the data disks is via the VM.
The C: disk is persistent, meaning the data on it is retained when the VM is rebooted. But D:\ is non-persistent: when you reboot, the disk is wiped clean, so at no point should D:\ be used to store user data.
So writing a process to sync between two VMs just to keep pictures in sync is not ideal. You might know this already, but I wanted to set the context for the options below.
Your potential options are as follows:
1. Set up a file share using the new Azure File service (in preview): http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx. This will be the single source for all your images, and you don't need to worry about syncing files.
2. Store the images in Azure Blob Storage and access them from the application running in the VM: http://blogs.msdn.com/b/yaohuang1/archive/2012/07/02/asp-net-web-api-and-azure-blob-storage.aspx and http://www.nickharris.net/2012/11/how-to-upload-an-image-to-windows-azure-storage-using-mobile-services/
3. Host another VM as a web server and serve your images from there; the two VMs can then reference the images. The cost here is hosting the extra VM.
The key point with all three options is that there is no need to sync files between two different places; everything lives in a single place.
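With the current Python SDK (which postdates this answer), a rough illustration of option 2: a hedged sketch that generates a short-lived read-only URL the web application on either VM could hand out for an image. The account, container, key, and blob names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Hypothetical placeholders -- substitute your own account details.
ACCOUNT = "mystorageacct"
CONTAINER = "images"
KEY = "<storage-account-key>"

def image_url(blob_name: str) -> str:
    """Return a read-only URL for an image, valid for one hour."""
    sas = generate_blob_sas(
        account_name=ACCOUNT,
        container_name=CONTAINER,
        blob_name=blob_name,
        account_key=KEY,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    return (f"https://{ACCOUNT}.blob.core.windows.net/"
            f"{CONTAINER}/{blob_name}?{sas}")

print(image_url("logo.png"))
```

Because both VMs generate URLs against the same container, neither machine holds the images locally and nothing needs to be synced.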
Edited based on new information:
In your scenario, hosting your files on the VMs is not the right approach. Even for a short-term solution, you should take the following into consideration if you are using the Azure Load Balancer.
The Azure Load Balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate a hash that maps traffic to the available servers, and the distribution is fairly random. So if you load-balance the VMs, you cannot control which VM the images are accessed from.
Manual updates are not practical in this scenario.
You either need to set up a virtual network that allows you to create and share a Windows file share, or you should investigate the Azure File service for creating a share that both VMs connect to (see: http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx).

How do I mount a large NTFS volume with Azure?

I have a legacy application and third-party software that both require NTFS volumes to operate. Changing the software would be a last resort.
The requirement is to have a central storage location for media (videos, images, etc.) that each computer in a domain can access. The size requirement can be as high as 20 terabytes.
My proposed solution is to create a domain and have one of the computers act as a simple file server, with multiple volumes mounted and accessible from the other computers through DFS (Distributed File System). DFS is in the picture because we are looking to expand the DFS service to provide redundancy.
Is my proposed solution viable? I am willing to accept that I should be evaluating storage/hosting solutions other than Azure that will allow me to meet the requirement.
Your best bet might be Windows Azure Virtual Machines. In this model, an extra-large virtual machine can mount 16 separate 1 TB data drives. You'd have to combine multiple virtual machines to reach your 20 TB requirement.
It sounds like a reasonable solution.
Using Windows Azure Drives will give you NTFS.
Azure Drives are stored as virtual hard disks (VHDs) in Blob Storage. I believe one drive can contain at most 1 TB of data (a blob storage limitation), so you will have to mount multiple drives.
There is an interesting article on sharing drives across multiple role instances via SMB. Admittedly, I have not tried this myself.
