Fast shared disk in an Azure VM Scale Set cluster - azure

We are using Azure VM Scale Sets to compute a larger job. In some stages we want the machines to share data with each other. We have tried Blob Storage for this, but it's way too slow.
We are looking at either making the machines talk to each other directly or, as a simpler solution, having them share a network drive (a fast one, close to the actual hardware). Is this available in Azure? As we understand it, Azure Files is as slow as Blob Storage because it's built on top of it.
Is it possible to create a disk that is shared between VM's in an Azure Scale Set?

No, this is not possible. You might use network shares instead.
Well, you could implement a Software Defined Storage cluster, but that's probably overkill.
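If a plain network share is enough, one option is to export a directory over NFS from one VM and mount it on the others, since all instances in the scale set share a VNet. A minimal sketch, assuming Ubuntu VMs and a 10.0.0.0/24 subnet (the paths, subnet, and `<server-private-ip>` placeholder are assumptions, not Azure specifics):

```shell
# On the VM acting as the file server (assumed Ubuntu):
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /srv/shared
# Export the directory to the scale set's subnet (assumed 10.0.0.0/24)
echo "/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each worker instance:
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs <server-private-ip>:/srv/shared /mnt/shared
```

Traffic between the instances stays on the VNet, so this avoids the round trip through Blob Storage entirely.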

Related

Are there any options for high speed file sharing in Azure?

I have two VMs on the same VNet and I would like to be able to copy a directory that has thousands of files and is about 400 MB.
I can use a UNC path to copy the files, but that takes 2 minutes. I've also tried using a storage account and created a file share, but that is also slow.
Are there any other Azure resources that might make getting files from one VM to another faster?
As the comments point out, if you have two VMs in the same VNet, you should use their private IP addresses. Traffic between the two VMs is kept on the Azure backbone network. You could also directly copy/paste the files from one VM to the other when you RDP into it.
Also, different VM sizes have different performance characteristics. For the best performance, it's recommended that you migrate any VM disk that requires high IOPS to Premium Storage. VM disks that use Premium Storage store data on solid-state drives (SSDs).
High-performance Premium Storage and managed disks for VMs
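With thousands of small files, the per-file round trips of a single-stream copy are often the bottleneck rather than bandwidth. A sketch of copying over the private address with tools that compress or parallelize; the address 10.0.0.5, the paths, and the `azureuser` account are placeholders:

```shell
# Compress and pipeline thousands of small files over the VNet's
# private address (10.0.0.5 is a placeholder for the target VM)
rsync -az --info=progress2 /data/mydir/ azureuser@10.0.0.5:/data/mydir/

# On Windows VMs, robocopy can multithread the copy over the UNC path:
# robocopy C:\data \\10.0.0.5\data /E /MT:32
```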

Can we attach Azure Premium storage to multiple VMs

Is it possible to attach the same premium storage to multiple VMs so the files stored in the storage can be accessed by all of them?
The idea is to have a VM optimized for CPU that will calculate something and write results to the storage and have a low cost VM that will read the results and do other operations.
So if by "same" you mean the same storage account - yes, you can do that. If by "same" you mean the same VHD - no, you can't simultaneously attach the same VHD to different VMs.
But you can have Azure Files take on that role; it works like an SMB share where you can store the results and other nodes will read them. Or you could just create a share on the VM that is supposed to read the results and store them there.
Either way, it's perfectly doable.
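The Azure Files approach described above can be sketched with the current `az` CLI; the storage account, share name, and mount point here are placeholders, and `$STORAGE_KEY` is assumed to hold the account key:

```shell
# Create a file share in an existing storage account (names are placeholders)
az storage share create --name results --account-name mystorageacct

# On a Linux VM, mount the share over SMB 3.0
sudo mkdir -p /mnt/results
sudo mount -t cifs //mystorageacct.file.core.windows.net/results /mnt/results \
    -o vers=3.0,username=mystorageacct,password="$STORAGE_KEY",dir_mode=0777,file_mode=0777
```

The CPU-optimized VM writes results into the share and the low-cost VM reads them from the same mount, with no VHD attached to more than one machine.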

Can you replicate DFS share into Azure from On Prem?

Can you replicate DFS share into Azure from On Prem?
If so which storage do i need to do it?
Would it be easier to setup a virtual machine in Azure and use DFS-R to take advantage of it over a VPN tunnel?
Well, you can replicate it to Azure. But for such an amount of data you would want to ship drives to Azure. Also, you can't have more than 64 data disks on a VM, so that's about 63.9 TB of data for a single VM maximum.
I'm not sure what the use case for 100 TB on DFS is, though.

I would like to change Microsoft Azure Virtual Machine size without losing my data

I am using two Microsoft Azure Virtual Machines (marked as classic), both running Linux. One is used for test purposes and internal demos; the other is production, running a few clients' instances.
What I would like to do is change the size of a Virtual Machine. I understand this is a quite common process, can easily be done from the Azure Management Portal, and should not affect data. However, when I changed the size of our testing machine, exactly that happened and we lost all our data.
Azure Support answer received was:
"We recommend you delete the VM by keeping the attached disks and create a new VM with the required size." Not sure why this would be better?
Any data stored on the ephemeral (internal-to-chassis) scratch disk is at risk, as it's a non-durable disk (and will in all likelihood be destroyed/recreated upon resizing a VM).
The only way to have durable data is to use Azure Storage (blobs, a VHD as an attached disk, Azure File storage) or an external database. Azure Storage is durable (minimum 3 copies) and is not stored with your VM.
One more thing: The VM's OS Disk is a VHD in Azure Storage (so the OS disk is durable, just like attached vhd's).
You have more than one way to do that, and keep in mind what David said: only data on OS disks, attached disks, and blobs is durable.
To prevent losing data and since you're using Classic VMs, you can do the following:
1- Go to your VM in the portal and capture an image of it.
2- Go to your new image and create a new VM from it, specifying the new size that you need.
3- When done, connect to your new VM while keeping the old one running.
4- Check that all your data is there; if yes, you can remove the old one. (In case you need the old IP, you can still assign it to the new one.)
Cheers.
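For non-classic (ARM) VMs, this flow is built into the CLI; a sketch with placeholder resource group and VM names, deallocating first so the VM can land on hardware that supports the new size:

```shell
# Placeholder resource group and VM names
az vm deallocate --resource-group myRG --name myVM
az vm resize --resource-group myRG --name myVM --size Standard_DS3_v2
az vm start --resource-group myRG --name myVM
```

Note that data on the temporary/ephemeral disk is still lost across a deallocate; only the OS disk and attached data disks survive.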

Backup Microsoft Azure Virtual Machine

I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of hosting via Azure at no cost, where I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e. what if someone were to run an UPDATE statement and forget the WHERE clause). Is there a mechanism for this with Azure?
I know the VMs are stored as .VHD files in my Azure storage account, but the VM image is 127 GB. Downloading that nightly, even with FiOS internet, isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a VHD. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds). You don't need to make raw REST calls, though: there's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots, and return to a previous snapshot later. A snapshot is read-only (which you can copy from later) and takes up no space initially. However, as storage pages are changed, the snapshot grows.
One question though: why such a large VM image? Are you storing OS + data on same vhd? If so, it may make more sense to mount a separate Azure Drive (also stored in VHD in blob storage) to store data, and make independent copies / snapshots.
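With the current `az` CLI, the asynchronous copy and snapshot operations described above look roughly like this; the storage account, container, and blob names are placeholders:

```shell
# Server-side asynchronous copy of a VHD blob (fast within a region)
az storage blob copy start \
    --account-name mystorageacct \
    --destination-container backups \
    --destination-blob myvm-backup.vhd \
    --source-uri "https://mystorageacct.blob.core.windows.net/vhds/myvm.vhd"

# Or take a read-only snapshot of the blob in place
az storage blob snapshot \
    --account-name mystorageacct \
    --container-name vhds \
    --name myvm.vhd
```

The copy runs inside the storage service, so nothing is downloaded to your machine; the snapshot costs nothing up front and only grows as pages of the base blob change.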
