Azure Blob storage vs Azure Drive

I am looking at moving to Windows Azure rather than typical hosting, but I'm unsure how best to store images. After searching I've found that there are two possible solutions: Blob storage or Azure Drive.
I have looked into Blob storage and, although I have begun to get used to the idea, it will require quite a lot of modification to our CMS. In my searching I have just stumbled across Azure Drive which, if I understand correctly, creates a virtual hard drive that allows your application to run as it would on a normal server.
Are there any disadvantages to Azure Drive compared with Blob storage? It sounds like migrating current applications to Azure will be much easier with Azure Drive than with Blob storage, but I just wanted to check that there aren't any major flaws in this plan.
Thanks
Pat

Yes, there are quite a few differences. First, a Windows Azure Drive is actually a VHD uploaded as a page blob and mounted by a driver to provide an NTFS partition. So, to get at any data on it, you must mount it (or a snapshot of it); the data is not directly accessible without mounting.
Next, a drive can only be mounted read/write by one instance. If you want anything else to even read that drive, you must snapshot it and mount the snapshot, which introduces a staleness problem for read-only instances mounting those snapshots. You can work around this with an SMB share, but that is slightly complicated.
You would also lose automatic CDN capabilities if you used a drive.
Drives are great for their intended purpose - getting applications that must use NTFS to work in Windows Azure.
If you were to use blobs natively, you would (a) let the storage subsystem scale and take the load of serving the data off your instances, and (b) be able to use the CDN to get geo-scale on the images as well.
While it is some work, I would strongly recommend putting images in blob storage. It is ideal for it.
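For illustration, here is a minimal sketch of the native-blob approach, written with the modern azure-storage-blob Python SDK (which postdates this answer); the account, key, and container names are placeholders.

```python
# Minimal sketch: store an image as a blob and serve it by URI.
# <account>, <account-key>, and the container name are placeholders.
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<account-key>",
)
images = service.get_container_client("images")

# Upload once; every instance can then reference the image by URI,
# with no drive mounted anywhere.
with open("logo.png", "rb") as f:
    images.upload_blob(
        name="logo.png",
        data=f,
        overwrite=True,
        content_settings=ContentSettings(content_type="image/png"),
    )

# Publicly addressable if the container allows blob-level public read,
# and a CDN endpoint can be pointed at the same storage account.
print(images.get_blob_client("logo.png").url)
```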

Related

Do I need Azure blob storage or just a simple web server on a VM?

I have a VM on Azure which is my content management system using nodejs and mongodb.
One of the things the CMS does is provide a social sharing function where HTML pages are created and users are given the URL to each page.
I expect a large volume of users (probably 5,000 at a given time) to access these HTML pages. I do not want this load to be on the same server as my CMS.
So I was thinking about moving the html pages to another server. My question is do I need to look at Azure blob storage to do this or should I just use another VM and put files there?
The files are very small and minified. I want to keep my costs down while, at the same time, having the server auto-scale if I get more than 5,000 requests.
The question itself is somewhat subjective/opinion-soliciting, and how you solve this problem is really up to you.
But from an objective perspective:
Blobs themselves are not the same as local file storage. If you're going to store content in them, either your CMS needs to support them natively or you're going to need to build that support into it (if that's even possible). Since they have their own REST API (and related SDKs), you cannot simply do file I/O operations against them. They are, however, accessible via URI (which may be made private or public).
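To make that concrete, here is a hedged sketch (using the modern azure-storage-blob Python SDK, not anything the CMS ships with) of what "building that support in" looks like: pushing a generated HTML page to a blob instead of writing it to disk. All names are placeholders.

```python
# Sketch: publish a generated share page to blob storage so the URL is
# served by the storage subsystem rather than the CMS VM.
# <account>, <account-key>, and the names below are placeholders.
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<account-key>",
)
pages = service.get_container_client("shared-pages")

html = "<html><body>Shared article</body></html>"
pages.upload_blob(
    name="articles/my-post.html",
    data=html,
    overwrite=True,
    content_settings=ContentSettings(content_type="text/html"),
)

# This is the URL you hand to users; note it is an SDK/REST call,
# not ordinary file I/O.
print(pages.get_blob_client("articles/my-post.html").url)
```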
Azure VMs store their disks (VHDs) in page blobs (so, technically speaking, you're already using blob storage). And each VM may have attached disks (1 TB each), also in page blobs, two disks per core (so a dual-core VM supports four attached 1 TB disks). Just like your OS disk, these attached disks are durable, in blob storage. A CMS may access an attached disk once it's formatted and given a drive letter (Windows) or mounted (Linux). EDIT - forgot to mention: if you go with the attached-disk approach, you need to consider the fact that these disks are per-VM; they are not shared across multiple VMs (in the event you scale your CMS to multiple instances).
Azure File Service is an SMB share sitting atop Azure Blob Storage. Again, durable storage, and drive-mappable. EDIT: unlike attached disks, Azure File Service SMB shares are accessible across multiple VMs.

Shared drive between Azure Virtual Machines

I have just moved my web site to an Azure Virtual Machine and have been up and running since last weekend. So far I'm very happy with the results and looking forward to taking advantage of Azure further in due course.
I do have what would seem to be a pretty common scenario - and, to my surprise, I can't find an obvious solution. I have a couple of VMs: one is my primary server, and the other will be suspended and ready to kick in (manually is fine) if the first one has an issue. I back up my web site to Azure Storage (my backup utility supports saving to an Azure blob). That's the good news.
I had assumed that I could somehow mount the storage blob as a drive, thereby effectively having shared storage across the two VMs. However, to my surprise, I haven't found an obvious way to do that. I have found a third-party utility (Gladinet Cloud Desktop), but it seems painfully slow. As I say, I admit I just assumed this would be an easy thing to do.
So, stepping back, what is the most straightforward way to access a storage blob from multiple VMs? I really don't want to set up a private network and then set up network file sharing - that seems so old school :) and it places a dependency on one specific VM.
Any suggestions?
Thanks.
This is now not just possible, but very easy, and it looks just like a filesystem. Check out the new Azure File Service (in preview as of this writing).
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
Quoting from the announcement:
"The Azure File service exposes file shares using the standard SMB 2.1 protocol. Applications running in Azure can now easily share files between VMs using standard and familiar file system APIs like ReadFile and WriteFile."
It is better than just an SMB drive, as the announcement goes on to mention:
"In addition, the files can also be accessed at the same time via a REST interface, which opens a variety of hybrid scenarios. Finally, Azure Files is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our platform."
In an Azure Resource Manager storage account you can create a network file share that can be mounted as a drive on multiple VMs, or on computers and devices outside Azure, for Unix, Linux, and Windows.
In general: go to your storage account ➡ Files ➡ Create file share ➡ name the share and set its disk-space quota ➡ click Connect to obtain the command for Windows or Linux to mount the share on the respective devices. Note this ONLY WORKS for locally redundant storage, not zone-redundant, not geo-redundant.
https://www.youtube.com/watch?v=SGPJZMaSlis
The video tutorial above shows you step by step how to do this. The only restriction is OS support for the SMB 3.0 protocol, which Windows 8 and above and Windows Server 2012 and above provide. It also requires firewall port 445 to be open.
You can access blobs from multiple VMs. This is a very common pattern. What you can't do is mount a drive (stored in a blob) on multiple VMs simultaneously. That is, if you decide to create a VHD disk and attach it to a VM (whether Linux or Windows - doesn't matter), then the blob-backed disk is locked to that VM, which can then work with the VHD like it would a local file system.
If, on the other hand, you deal with blobs discretely as single objects, you can easily work with these blobs across any number of VMs.
If you're looking to do something like network sharing (e.g. SMB), you'd either need to use the Azure File Service or stage your own SMB server VM.
In the case where you absolutely must have a mounted file system, yet want to use the file system in a primary/backup fashion, you could always do something via the API to unmount from one VM and remount to another VM. This can be executed via PowerShell (Windows only) or via the cross-platform command-line interface on Linux/Mac/Windows. You'd do this if your primary VM failed for some reason.
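As a rough sketch of that failover step - using the modern azure-mgmt-compute Python SDK and managed disks rather than the PowerShell/CLI tooling this answer refers to, so treat every name and call here as an assumption:

```python
# Sketch: detach a data disk from a failed primary VM and attach it to
# a standby VM. Subscription, resource group, and disk IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DataDisk, ManagedDiskParameters

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG = "<resource-group>"
DISK_ID = "<full-resource-id-of-the-shared-data-disk>"

def detach(vm_name: str) -> None:
    # Remove the shared disk from this VM's data-disk list and update it.
    vm = client.virtual_machines.get(RG, vm_name)
    vm.storage_profile.data_disks = [
        d for d in vm.storage_profile.data_disks
        if d.managed_disk is None or d.managed_disk.id != DISK_ID
    ]
    client.virtual_machines.begin_create_or_update(RG, vm_name, vm).result()

def attach(vm_name: str, lun: int = 0) -> None:
    # Attach the existing disk to the standby VM at the given LUN.
    vm = client.virtual_machines.get(RG, vm_name)
    vm.storage_profile.data_disks.append(
        DataDisk(lun=lun, create_option="Attach",
                 managed_disk=ManagedDiskParameters(id=DISK_ID))
    )
    client.virtual_machines.begin_create_or_update(RG, vm_name, vm).result()

# Primary failed: move the disk over to the standby.
detach("primary-vm")
attach("standby-vm")
```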
These are good articles; I am also looking into this and hope to find the right solution. I hope you share your experience here along with your choice.
Deciding when to use Azure Blobs, Azure Files, or Azure Disks
https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
There are also premium disks:
https://azure.microsoft.com/en-us/pricing/details/managed-disks/
Manually create and use a volume with Azure disks in Azure Kubernetes Service (AKS)
https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
Note: An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use Azure Files.
Performance guidelines for SQL Server in Azure Virtual Machines
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performance
Deploy a SQL Server container in Kubernetes with Azure Kubernetes Services (AKS)
https://learn.microsoft.com/en-us/sql/linux/tutorial-sql-server-containers-kubernetes?view=sql-server-2017

How to write to a tmp/temp directory in Windows Azure website

How would I write to a tmp/temp directory in a Windows Azure Web Site? I can write to a blob, but I'm using an npm package that requires me to give it file names so that it can write directly to those files.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way traditional apps will be familiar with. However, it's not persistent, so anything you put there that must survive the VM instance being repaved should be copied to Blob storage. Also, this storage is specific to a given role instance: if you have two instances of the same role, they each have their own local storage buckets.
Alternatively, you can use Azure Drive, which keeps the information persistent but still doesn't allow multiple parallel writers.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
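To illustrate the PaaS pattern above - hand the library a real file path, then persist the result - here is a minimal sketch with the modern azure-storage-blob Python SDK; generate_report and all names are placeholders I've made up:

```python
# Sketch: write to volatile local/temp storage, then copy the output to
# durable blob storage before the instance is recycled or repaved.
import os
import tempfile
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<account-key>",
)

def generate_report(path: str) -> None:
    # Stand-in for the library that insists on writing to a file path.
    with open(path, "wb") as f:
        f.write(b"%PDF-1.4 placeholder")

# The library wants a file name it can write to directly.
tmp_path = os.path.join(tempfile.gettempdir(), "report.pdf")
generate_report(tmp_path)

# Persist the output; local storage will not survive a repave.
with open(tmp_path, "rb") as f:
    service.get_blob_client("reports", "report.pdf").upload_blob(f, overwrite=True)
```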
Just from my understanding - please correct me if anything is wrong.
In a Windows Azure Web Site, the content of your website is stored in blob storage and mounted as a drive that is shared by all instances your web site is using. And since it's in blob storage, it's persistent. So if you need the local file system, I think you can use the folders under your web site's root path. But I don't think you can use the system tmp or temp folder.

Dynamic file hosting on Azure

I am using Windows Azure for a custom blog implementation. The blog uses CKEditor and the CKFinder file-management plugin. Typically the file-management plugin connects to a file-system directory to store the files. I need to store these as if it were a local directory and serve them through HTTP requests. In Azure you cannot rely on the file system to persist through recycles.
I assume I should use Azure Storage, but I am at a loss as to how to do this. Is there a way to "mount" this storage to the file system? Am I correct in my assumption to use Storage? If not, any guidance as to what I am missing?
Thanks
Or, you could use AzureBlobDrive to mount blob storage as a drive in Azure directly (no VHD, and no limitation of only one instance being able to write).
https://github.com/richorama/AzureBlobDrive
You can actually mount a page blob as an NTFS drive, which is then a "durable drive" (just like any other blob), and you access it via a drive letter, just like a locally-attached (but volatile) drive.
The issue is that, using mounted drives, you may only have one writer, so this might cause challenges when scaling to multiple instances.
Take a look at this MSDN post to see an example of mounting a drive. Notice that, while the example doesn't set up any cache, you can specify a cache size. The cache is stored on a local disk resource.
EDIT: For a tutorial, download the Windows Azure Training Kit. Go to hands-on labs, and open Exploring Windows Azure Storage. Check out Exercise 4: Working with Drives.

Is it possible to mount blob storage to my local machine for deployment?

It would be very useful to configure my build script to dump some files into Azure blob storage so they can be picked up by my Azure web role.
My preferred plan was to find some way of mounting the blob storage on my build server as a mapped drive and simply using Robocopy to copy the files over. This would involve the least amount of friction, as I already deploy some files like this to other web servers using WebDrive.
I found a piece of software that will allow me to do that: http://www.gladinet.com/
However on further investigation I found that it needs port 80 to run without some hairy looking hacking about on the server.
So is there another piece of software I could use or perhaps another way I haven't considered, such as deploying the files to a local folder that is automagically synced with blob storage?
Update in response to @David Makogon
I am using http://waacceleratorumbraco.codeplex.com/ - this performs two-way synchronisation between blob storage and the web roles. I have tested this with http://cloudberrylab.com/ and I can deploy files manually to the blob and they are deployed correctly to the web roles. I have also done the reverse: updated files in the web roles, which were then synced back to the blob, and I have subsequently edited/downloaded them from blob storage.
What I'm really looking for is a way to automate the CloudBerry side of things, so I don't have a manual step to copy a few files over. I will investigate the PowerShell solutions in the meantime.
I know this is an old post - but in case someone else comes here... the answer is now "yes". I've been working on a CodePlex project to do exactly that. (All source code is available).
http://azuredrive.codeplex.com/
If you're comfortable using PowerShell in your build process, then you could use the Cerebrata cmdlets to upload the files. If that doesn't work for you, you could write a custom activity (but this sounds quite a bit more involved).
Mounting a cloud drive from a non-Windows Azure compute instance (e.g. your local build machine) is not supported.
Having said that: Even if you could mount a Cloud Drive from your build machine, your compute instances would need access to it too, and there can only be one writer. If your compute instances only needed read-only access, they'd need to create a snapshot after you upload new files.
This really doesn't sound like a good idea though. As knightpfhor suggested, the Cerebrata cmdlets provide this capability (look at Import-File). This allows you to push individual files into their own blobs. You can optimize further by pushing a single ZIP file into a blob. You can then use a technique similar to the one described by Nate Totten in his multi-tenant web role sample, to detect new zip files and expand them to your local storage. Nate's blog post is here.
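Here is a hedged sketch of that single-ZIP optimization with the modern azure-storage-blob Python SDK (container and path names are placeholders; the tooling discussed in this thread was PowerShell-based):

```python
# Sketch: bundle build output into one ZIP and push it as a single blob,
# for web roles to detect and expand into local storage.
import io
import pathlib
import zipfile
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<account-key>",
)
deploy = service.get_container_client("deployments")

out_dir = pathlib.Path("build/output")
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in out_dir.rglob("*"):
        if path.is_file():
            zf.write(path, arcname=str(path.relative_to(out_dir)))
buf.seek(0)

# One upload instead of many small ones.
deploy.upload_blob("site-content.zip", buf, overwrite=True)
```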
Oh, and if you don't want to use the Cerebrata cmdlets, you can upload blobs directly with the Windows Azure Storage REST API (though the cmdlets are very simple to use and integrate seamlessly with PowerShell).
