I have a data processing VM in Azure with some existing data disks. Now I would like to create a new volume (F:) with 8TB capacity using two 4TB data disks. I am not sure how to do this by merging two different disks and making them one volume. Could someone help me, please?
"I am not sure how to do this by merging two different disks and making it one volume"
It seems you are using Windows Server.
You can use Windows storage pools (Storage Spaces); this way, you can merge the two disks into one volume.
For more information about storage pools, please refer to this blog.
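For reference, here is a minimal sketch of the in-guest steps, driven from Python via subprocess purely for illustration; the pool name, virtual disk name, and drive letter F: follow the question, and the real work is done by the standard PowerShell Storage cmdlets (New-StoragePool, New-VirtualDisk, Initialize-Disk, Format-Volume):

```python
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command (requires an elevated session) and return stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Pool every disk that is eligible for pooling (here: the two empty 4TB data disks).
run_ps("$disks = Get-PhysicalDisk -CanPool $true; "
       "New-StoragePool -FriendlyName 'DataPool' "
       "-StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks")

# Carve one 'Simple' (striped, no redundancy) virtual disk out of the whole pool,
# giving a single ~8TB device backed by both physical disks.
run_ps("New-VirtualDisk -StoragePoolFriendlyName 'DataPool' -FriendlyName 'DataVDisk' "
       "-ResiliencySettingName Simple -UseMaximumSize")

# Initialize the virtual disk, create an F: partition, and format it as NTFS.
run_ps("Get-VirtualDisk -FriendlyName 'DataVDisk' | Get-Disk | "
       "Initialize-Disk -PassThru | "
       "New-Partition -DriveLetter F -UseMaximumSize | "
       "Format-Volume -FileSystem NTFS -Confirm:$false")
```

Note that a Simple space has no redundancy of its own: losing either physical disk loses the volume, so rely on Azure's replication of the underlying VHDs and keep backups.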
Related
I have two VMs using the same VNet and I would like to be able to copy a directory that has thousands of files and is about 400 MB.
I can use a UNC path to copy the files, but that takes 2 minutes. I've also tried using a storage account and creating a file share, but that is also slow.
Are there any other Azure resources that might make getting files from one VM to another faster?
As the comment points out, if you have two VMs in the same VNet, you should use their private IP addresses. Traffic between the two VMs stays on the Azure backbone network. You can also copy/paste files directly from one VM to the other when you RDP into it.
Also, different VM sizes offer different performance. For the best performance, it's recommended that you migrate any VM disk that requires high IOPS to Premium Storage. VM disks that use Premium Storage store data on solid-state drives (SSDs).
High-performance Premium Storage and managed disks for VMs
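If the bottleneck is per-file overhead (thousands of small files each paying an SMB round trip), one workaround is to archive the directory first and stream a single file across the VNet. The sketch below is not an Azure feature, just plain Python sockets between the two private IPs; the port 9000 and the address 10.0.0.5 are made-up examples:

```python
import shutil
import socket
import sys

CHUNK = 1 << 20  # 1 MiB

def send_dir(directory: str, host: str, port: int = 9000) -> None:
    """Zip the directory (thousands of small files become one stream)
    and push it over a TCP socket to the peer VM's private IP."""
    archive = shutil.make_archive("payload", "zip", directory)
    with socket.create_connection((host, port)) as sock, open(archive, "rb") as f:
        while chunk := f.read(CHUNK):
            sock.sendall(chunk)

def recv_dir(dest_zip: str = "payload.zip", port: int = 9000) -> None:
    """Run this on the destination VM first; it writes the incoming
    stream to disk and unpacks it into ./restored."""
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        with conn, open(dest_zip, "wb") as f:
            while chunk := conn.recv(CHUNK):
                f.write(chunk)
    shutil.unpack_archive(dest_zip, "restored")

if __name__ == "__main__":
    if sys.argv[1] == "send":
        send_dir(sys.argv[2], sys.argv[3])  # e.g. python copy.py send ./data 10.0.0.5
    else:
        recv_dir()                          # e.g. python copy.py recv
```

Start the receiver on the destination VM, then run the sender on the source; one sequential 400 MB stream over the backbone should finish in seconds rather than minutes, because compressing first removes the per-file round trips.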
We are using Azure VM Scale Sets to run a large compute job. In some stages we want the machines to share data with each other. We have tried Blob Storage for this, but it's way too slow.
We are looking at either making the machines talk to each other directly or, as a simpler solution, having them share a network drive (a fast one, close to the actual hardware). Is this available in Azure? As we understand it, Azure Files is as slow as Blob Storage because it's built on top of it.
Is it possible to create a disk that is shared between VM's in an Azure Scale Set?
No, this is not possible. You might use network shares instead.
Well, you could implement a Software Defined Storage cluster, but that's probably overkill.
Is it possible to attach the same premium storage to multiple VMs, so that the files stored in the storage can be accessed from all of them?
The idea is to have a CPU-optimized VM that will calculate something and write results to the storage, and a low-cost VM that will read the results and do other operations.
So if by "same" you mean the same storage account: yes, you can do that. If by "same" you mean the same VHD: no, you can't simultaneously attach the same VHD to different VMs.
But you can have Azure Files take on that role; it works like an SMB share where you can store the results and other nodes will read them. Or you could just create a share on the VM that is supposed to read the results, and store them there.
Either way, it's perfectly doable.
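Here is a minimal sketch of the Azure Files approach with the azure-storage-file-share Python SDK; the connection string, share name, and file name are placeholders:

```python
from azure.core.exceptions import ResourceExistsError
from azure.storage.fileshare import ShareClient

CONN_STR = "<storage-account-connection-string>"  # placeholder

share = ShareClient.from_connection_string(CONN_STR, share_name="results")
try:
    share.create_share()
except ResourceExistsError:
    pass  # the share was already created by the other VM

# The CPU-optimized VM writes its output...
share.get_file_client("run-001.json").upload_file(b'{"score": 0.97}')

# ...and the low-cost VM reads it back.
data = share.get_file_client("run-001.json").download_file().readall()
```

The same share can also be mounted over SMB (\\<account>.file.core.windows.net\results), so the reading VM does not even need the SDK.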
I am using two Microsoft Azure Virtual Machines (marked as classic), both running Linux. One is used for testing and internal demos; the other is production, running a few clients' instances.
What I would like to do is change the size of a Virtual Machine. I understand this is quite a common process, can easily be done from the Azure Management Portal, and does not affect data. However, when I changed the size of our testing machine, exactly that happened: we lost all our data.
The answer we received from Azure Support was:
"We recommend you delete the VM by keeping the attached disks and create a new VM with the required size." Not sure why this would be better?
Any data stored on the ephemeral (internal-to-chassis) scratch disk is at risk, as it's a non-durable disk (and will in all likelihood be destroyed/recreated upon resizing a VM).
The only way to have durable data is to use Azure Storage (blobs, a VHD as an attached disk, Azure File storage) or an external database. Azure Storage is durable (minimum 3 copies) and is not stored with your VM.
One more thing: The VM's OS Disk is a VHD in Azure Storage (so the OS disk is durable, just like attached vhd's).
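To illustrate the blob option, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string and container name are placeholders, and the container is assumed to exist already:

```python
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"  # placeholder

container = BlobServiceClient.from_connection_string(CONN_STR) \
    .get_container_client("durable-data")  # assumed to exist already

# Data written here lives in Azure Storage (3+ replicas), independent of
# the VM, so resizing or redeploying the VM cannot destroy it.
with open("results.csv", "rb") as f:
    container.upload_blob(name="results.csv", data=f, overwrite=True)
```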
You have more than one way to do that, and keep in mind what David said: data on OS disks, attached disks, and blobs is the only durable data.
To prevent losing data and since you're using Classic VMs, you can do the following:
1- Go to your VM on portal and capture an image out of it.
2- Go to your new image and create a new VM out of it, while specifying the new specs that you need.
3- When done, connect to your new VM, keeping the old one running without termination.
4- Check that all your data is there; if so, you can remove the old one. (In case you need the old IP, you can still assign it to the new one.)
Cheers.
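The steps above are for the classic (ASM) portal and have no direct Python SDK equivalent today, but as a rough sketch of the analogous capture-and-recreate flow on current ARM VMs with azure-mgmt-compute (subscription, resource group, VM, and image names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUB, RG, VM, IMAGE = "<subscription>", "<resource-group>", "<vm-name>", "<image-name>"
client = ComputeManagementClient(DefaultAzureCredential(), SUB)

# Step 1: stop the VM and mark it as generalized
# (run sysprep / waagent -deprovision inside the guest first).
client.virtual_machines.begin_deallocate(RG, VM).result()
client.virtual_machines.generalize(RG, VM)

# Step 2: capture a managed image from the generalized VM.
vm = client.virtual_machines.get(RG, VM)
client.images.begin_create_or_update(RG, IMAGE, {
    "location": vm.location,
    "source_virtual_machine": {"id": vm.id},
}).result()

# Steps 3-4: create a new VM from IMAGE with the size you need, verify your
# data, then delete the old VM (not shown).
```

One caveat of the ARM flow: a generalized VM can no longer be started, so unlike the classic steps above you cannot keep the old machine running while you verify the new one.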
I want to create a Linux VM that can accommodate up to 10 terabytes of data. Not sure how to accomplish that on Microsoft Azure, or if it is even possible. Any insight would be appreciated.
You create the storage separately from the VM; Azure has a data service known as the Blob service.
Here is a link: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/
Supporting many terabytes is not a problem.
You create one Linux VM from an image and then attach (via the Azure portal, like here) 10 disks of 1TB each. Today, 1TB is the maximum size of a disk in Azure. As for the VM, you will need an Extra Large VM in order to accept this number of disks (up to 16 disks for an XL).
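For reference, here is a hedged sketch of the same attach loop with today's azure-mgmt-compute Python SDK and managed disks; the subscription, resource group, and VM names are placeholders, and current per-disk size limits are far above the 1TB mentioned above, though the pattern is unchanged:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUB, RG, VM_NAME = "<subscription>", "<resource-group>", "<vm-name>"
client = ComputeManagementClient(DefaultAzureCredential(), SUB)
vm = client.virtual_machines.get(RG, VM_NAME)

# Create ten 1TB empty managed disks and attach each one at the next LUN
# (assumes the VM has no data disks yet; otherwise offset the LUNs).
for i in range(10):
    disk = client.disks.begin_create_or_update(RG, f"data-disk-{i}", {
        "location": vm.location,
        "disk_size_gb": 1024,
        "creation_data": {"create_option": "Empty"},
    }).result()
    vm.storage_profile.data_disks.append({
        "lun": i,
        "name": disk.name,
        "create_option": "Attach",
        "managed_disk": {"id": disk.id},
    })

client.virtual_machines.begin_create_or_update(RG, VM_NAME, vm).result()
# Inside the Linux guest, combine the ten devices with LVM or mdadm to get
# a single ~10TB file system.
```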