Is it possible to attach the same premium storage to multiple VMs so that the files stored in it can be accessed from all of them?
The idea is to have a CPU-optimized VM that calculates something and writes the results to the storage, and a low-cost VM that reads the results and does other operations.
If by "same" you mean the same storage account - yes, you can do that. If by "same" you mean the same VHD - no, you can't attach the same VHD to different VMs simultaneously.
But you can have Azure Files take on that role: it works like an SMB share, where you can store the results and the other nodes will read them. Or you could just create a share on the VM that is supposed to read the results and store the results there.
Either way, it's perfectly doable.
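For example, here is a minimal sketch of mounting an Azure Files share on a Linux VM over SMB so that both machines see the same directory (the storage account name, share name and key below are placeholders):

# Mount an Azure Files share over SMB 3.0; account, share name and key are placeholders
sudo mkdir -p /mnt/results
sudo mount -t cifs //mystorageaccount.file.core.windows.net/results /mnt/results \
    -o vers=3.0,username=mystorageaccount,password=<storage-account-key>,dir_mode=0777,file_mode=0777

The CPU-optimized VM writes its output into /mnt/results, and the low-cost VM mounts the same share and reads the results from there.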
I would like to know if there's a way to create an Azure storage container of a specific size, say 20 GB. I know it can be created without any restriction (I think up to 200 TB?), but can it be created with a specific size? What if I need that kind of setup, like giving a user 20 GB initially and then at a later time increasing it to, say, 50? Is that possible?
In other words, how do I create that boundary/limitation for a new user that signs up for my app?
Not possible with the service by itself; this would have to be a feature implemented in your app.
As mentioned in the other answer, it is not possible to do at the service level with Blob Storage, and you will have to implement your own logic to calculate the size of the blob container.
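For illustration, one rough way to do that calculation from outside the app is to sum the blob sizes with the az CLI (the account, key and container names below are placeholders):

# Sum the sizes (in bytes) of all blobs in a container; account, key and container name are placeholders
# (very large containers need paging via --num-results / --marker)
az storage blob list --account-name mystorageaccount --account-key <key> \
    --container-name usercontainer --query "[].properties.contentLength" --output tsv \
    | awk '{ total += $1 } END { print total }'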
If restricting container size is the most important feature you are after, you may want to look at Azure File Storage. The equivalent of a blob container there is a File Share, and you can set a quota for a File Share and change it dynamically. At the time of writing this answer, the quota of a File Share can be any value between 1 GB and 5 TB (100 TB in the case of a Premium File Storage account).
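For example, with the az CLI the "20 GB now, 50 GB later" scenario from the question would look roughly like this (the share name, account name and key are placeholders):

# Create a file share with a 20 GiB quota; share name, account name and key are placeholders
az storage share create --name user-share --quota 20 \
    --account-name mystorageaccount --account-key <key>
# Later, raise the quota to 50 GiB without touching the data
az storage share update --name user-share --quota 50 \
    --account-name mystorageaccount --account-key <key>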
Azure File Storage and Blob Storage are somewhat similar, but they are meant to serve different purposes. For simple object storage, however, you can use either of the two (File Storage is more expensive than Blob Storage, though).
I have two VMs on the same VNet, and I would like to copy a directory that has thousands of files and is about 400 MB.
I can use a UNC path to copy the files, but that takes 2 minutes. I've also tried using a storage account and creating a file share, but that is also slow.
Are there any other Azure resources that might make getting files from one VM to another faster?
As the comments point out, if you have two VMs in the same VNet, you should use their private addresses. Traffic between the two VMs stays on the Azure backbone network. You can also just copy/paste the files from one VM to the other when you RDP into it.
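If the copy is still slow because of the sheer number of small files, one common option (a suggestion beyond the above, not the only way) is a multithreaded robocopy against the other VM's private IP; the IP, share name and paths below are placeholders:

rem Multithreaded copy of the whole directory tree over the VNet's private address
rem (source path, destination IP/share and thread count are placeholders)
robocopy C:\data \\10.0.1.5\share\data /E /MT:32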
Also, different VM sizes have different performance. For the best performance, it's recommended that you migrate any VM disk that requires high IOPS to Premium Storage. VM disks that use Premium Storage store data on solid-state drives (SSDs).
High-performance Premium Storage and managed disks for VMs
We are using Azure VM Scale Sets to compute a large job. In some stages we want the machines to share data with each other. We have tried Blob Storage for this, but it's way too slow.
We are looking at either making the machines talk to each other directly, or a simpler solution: having them share a network drive (a fast one, close to the actual hardware). Is this available in Azure? As we understand it, Azure Files is as slow as Blob Storage because it's built on top of Blob Storage.
Is it possible to create a disk that is shared between the VMs in an Azure Scale Set?
No, this is not possible. You might use network shares instead.
Well, you could implement a Software Defined Storage cluster, but that's probably overkill.
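As a rough illustration of the network-share route on Windows instances (the share name, directory and private IP are placeholders):

rem On the instance that produces the data: expose a directory as an SMB share
net share Results=C:\Results /GRANT:Everyone,FULL
rem On the other instances: map the share via the first VM's private IP
net use R: \\10.0.0.4\Results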
I am using two Microsoft Azure Virtual Machines (marked as classic), both running Linux. One is used for test purposes and internal demos; the other is production and runs a few clients' instances.
What I would like to do is change the size of a Virtual Machine. I understand this is a fairly common process, can easily be done from the Azure Management Portal, and is not supposed to affect data. However, when I changed the size of our test machine, we lost all of its data.
The answer received from Azure Support was:
"We recommend you delete the VM by keeping the attached disks and create a new VM with the required size." Not sure why this would be better?
Any data stored on the ephemeral (internal-to-chassis) scratch disk is at risk, as it's a non-durable disk (and will in all likelihood be destroyed/recreated upon resizing a VM).
The only way to have durable data is to use Azure Storage (blobs, VHDs as attached disks, Azure File storage) or an external database. Azure Storage is durable (a minimum of 3 copies) and is not stored with your VM.
One more thing: the VM's OS disk is a VHD in Azure Storage (so the OS disk is durable, just like attached VHDs).
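To make the support recommendation concrete: with ARM VMs and managed disks, "delete the VM but keep its disks, then recreate it at the new size" maps roughly to the az CLI commands below (resource group, VM, disk, OS type and size are placeholders; classic VMs would do the equivalent through the portal):

# Delete only the VM object; by default its managed OS disk is kept
az vm delete --resource-group my-rg --name my-vm
# Recreate a VM of the desired size from that same durable OS disk
# (disk name, OS type and size are placeholders)
az vm create --resource-group my-rg --name my-vm-resized \
    --attach-os-disk my-vm_OsDisk --os-type linux --size Standard_D4s_v3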
You have more than one way to do that, and keep in mind what David said: only data on OS disks, attached disks and blobs is durable.
To prevent losing data and since you're using Classic VMs, you can do the following:
1- Go to your VM in the portal and capture an image of it.
2- Go to your new image and create a new VM from it, specifying the new specs that you need.
3- When done, connect to your new VM while keeping the old one around, without terminating it.
4- Check whether all your data is there; if yes, you can then remove the old one. (In case you need the old IP, you can still assign it to the new one.)
Cheers.
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of Azure hosting at no cost, whereas I'm currently paying for that level of service with Rackspace.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (e.g. what if someone were to run an UPDATE statement and forget the WHERE clause).
I know the VMs are stored as .VHD files in my local Azure storage, but the VM image is 127 gigs. Downloading that nightly even with FIOS internet isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a VHD. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls, though: there's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
You can also take blob snapshots, and return to a previous snapshot later. A snapshot is read-only (which you can copy from later) and takes up no space initially. However, as storage pages are changed, the snapshot grows.
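For illustration, the equivalent copy and snapshot operations in the newer az CLI look roughly like this (the account, key, container and blob names are placeholders):

# Server-side (asynchronous) copy of the VHD blob into a backup container; names and key are placeholders
az storage blob copy start --account-name mystorageaccount --account-key <key> \
    --source-container vhds --source-blob myvm-osdisk.vhd \
    --destination-container backups --destination-blob myvm-osdisk-backup.vhd
# Or take a point-in-time snapshot of the blob instead of a full copy
az storage blob snapshot --account-name mystorageaccount --account-key <key> \
    --container-name vhds --name myvm-osdisk.vhd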
One question though: why such a large VM image? Are you storing the OS and data on the same VHD? If so, it may make more sense to mount a separate Azure drive (also stored as a VHD in blob storage) to hold the data, and make independent copies / snapshots of that.