How to access VM storage from a WebJob? - Azure

Because Azure has no native FTP capabilities, I created a small VM where clients can drop files. I have a separate WebJob that will process those files, but I can't figure out how to access the files from the WebJob. I first created a blob thinking I could attach it to the VM, but that doesn't seem to be possible. How is this done?

You can't access the C drive of another VM directly. Maybe you should use Azure Files, which provides SMB file shares? No need for FTP. You can access that share from anywhere, and it should be quite easy to use.
For small temporary files the cost should be near zero.
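If you go that route, a minimal sketch of the WebJob side might look like this, assuming the Azure.Storage.Files.Shares package; the "incoming-files" share, "drop" directory, and STORAGE_CONNECTION app setting are placeholder names:

```csharp
// Minimal sketch of a WebJob draining client-dropped files from an Azure Files share.
// Assumes the Azure.Storage.Files.Shares NuGet package; share/directory names and the
// STORAGE_CONNECTION app setting are placeholders.
using System;
using System.IO;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;

class DropFolderProcessor
{
    static void Main()
    {
        string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION");
        var share = new ShareClient(connectionString, "incoming-files");
        ShareDirectoryClient dropDir = share.GetDirectoryClient("drop");

        foreach (ShareFileItem item in dropDir.GetFilesAndDirectories())
        {
            if (item.IsDirectory) continue;

            ShareFileClient file = dropDir.GetFileClient(item.Name);
            using (Stream content = file.Download().Value.Content)
            {
                ProcessFile(item.Name, content); // your existing processing logic
            }
            file.Delete(); // remove the file once it has been handled
        }
    }

    static void ProcessFile(string name, Stream content)
    {
        Console.WriteLine($"Processing {name}");
    }
}
```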

Related

Shared drive between Azure Virtual Machines

I have just moved my web site to an Azure Virtual Machine and have been up and running since last weekend. So far I'm very happy with the results and looking forward to taking advantage of Azure further in due course.
I do have what would seem to be a pretty common scenario - and, to my surprise, I can't find an obvious solution. I have a couple of VMs - one is my primary server and the other will be suspended, ready to kick in (manually is fine) if the first one has an issue. I back up my web site to Azure Storage (my backup utility supports saving to an Azure blob). That's the good news.
I had assumed that I could somehow mount the storage blob as a drive, therefore effectively having shared storage across the two VMs. However, to my surprise, I haven't found an obvious way to do that. I have found a third party utility (Gladinet Cloud Desktop) but it seems painfully slow. As I say, I admit I just assumed this would be an easy thing to do.
So, stepping back, what is the most straightforward way to access a storage blob from multiple VMs? I really don't want to set up a private network and then set up network file sharing - that seems so old school :) and places a specific dependency on one specific VM.
Any suggestions?
Thanks.
This is now not just possible, but very easy, and it looks just like a filesystem. Check out the new Azure File Service (in preview as of this writing).
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
Quoting from the announcement:
"The Azure File service exposes file shares using the standard SMB 2.1 protocol. Applications running in Azure can now easily share files between VMs using standard and familiar file system APIs like ReadFile and WriteFile."
It is better than just an SMB drive, as the announcement goes on to mention:
"In addition, the files can also be accessed at the same time via a REST interface, which opens a variety of hybrid scenarios. Finally, Azure Files is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our platform."
In an Azure Resource Manager "Storage Account" you can create a network file share that can be mounted as a drive on multiple VMs, or on computers and devices not on Azure, for Unix, Linux, and Windows.
In general: go to your Storage Account ➡ Files ➡ Create File Share ➡ name the share and set the disk space quota ➡ click Connect to obtain the command for Windows or Linux to mount the share on the respective devices. Note this ONLY WORKS for Locally Redundant Storage, not Zone or Geo Redundant.
https://www.youtube.com/watch?v=SGPJZMaSlis
The video tutorial above shows you step by step how to do this. The only requirement is OS support for the SMB 3.0 protocol, which Windows 8 and above and Windows Server 2012 and above provide. It also requires firewall port 445 to be open.
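If you prefer not to click through the portal, the same share can also be created from code. A small sketch using the Azure.Storage.Files.Shares SDK, with a placeholder share name and connection string:

```csharp
// Creating the file share from code instead of the portal; names are placeholders.
using Azure.Storage.Files.Shares;

class CreateShare
{
    static void Main()
    {
        string connectionString = "<storage account connection string>";
        var share = new ShareClient(connectionString, "teamshare");

        // Creates the share if it doesn't already exist; the quota ("Disk Space Quota"
        // in the portal) can be set via ShareCreateOptions or adjusted in the portal.
        share.CreateIfNotExists();
    }
}
```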
You can access blobs from multiple VMs. This is a very common pattern. What you can't do is mount a drive (stored in a blob) on multiple VMs simultaneously. That is, if you decide to create a VHD disk and attach it to a VM (whether Linux or Windows - doesn't matter), then the blob-backed disk is locked to that VM, and that VM can then work with the VHD like it would a local file system.
If, on the other hand, you deal with blobs discretely as single objects, you can easily work with these blobs across any number of VMs.
If you're looking to do something like network sharing (e.g. SMB), you'd either need to use the Azure File Service or stage your own SMB server VM.
In the case where you absolutely must have a mounted file system, yet want to use the file system in a primary/backup fashion, you could always do something via the API to unmount from one VM and remount to another VM. This can be executed via PowerShell (Windows only) or via the cross-platform command-line interface on Linux/Mac/Windows. You'd do this if your primary VM failed for some reason.
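To make the "discrete blobs" approach concrete, here is a minimal sketch of two VMs sharing data through a common container with the Azure.Storage.Blobs SDK; the container, blob names, and connection-string setting are placeholders:

```csharp
// Two VMs sharing data as discrete blobs instead of a mounted VHD.
using System;
using System.IO;
using Azure.Storage.Blobs;

class BlobSharing
{
    static void Main()
    {
        string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION");
        var container = new BlobContainerClient(connectionString, "shared-data");
        container.CreateIfNotExists();

        // VM A: publish a file as a blob.
        using (FileStream local = File.OpenRead("latest.xml"))
        {
            container.GetBlobClient("reports/latest.xml").Upload(local, overwrite: true);
        }

        // VM B (or any number of other VMs): read the same blob concurrently.
        container.GetBlobClient("reports/latest.xml").DownloadTo("copy-of-latest.xml");
        Console.WriteLine("Downloaded copy-of-latest.xml");
    }
}
```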
These are good articles; I am also looking into this and hope to find the right solution.
I hope you share your experience here with your choice.
Deciding when to use Azure Blobs, Azure Files, or Azure Disks
https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
There are also premium disks:
https://azure.microsoft.com/en-us/pricing/details/managed-disks/
Manually create and use a volume with Azure disks in Azure Kubernetes Service (AKS)
https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
Note: An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use Azure Files.
Performance guidelines for SQL Server in Azure Virtual Machines
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performance
Deploy a SQL Server container in Kubernetes with Azure Kubernetes Services (AKS)
https://learn.microsoft.com/en-us/sql/linux/tutorial-sql-server-containers-kubernetes?view=sql-server-2017

Azure Architecture Design

I'm new to Azure, and a little confused about blob storage. I need clients to connect via FTP / SFTP to push and pull files (XML, CSV, EDI, etc). The pushed files are read in by a .NET application and written to a database. As I understand it, we would use a VM role to create an FTP / SFTP server, a worker role to execute the .NET code, SQL storage for the DB, and blob storage for the files.
First, am I correct in this assumption? Second, can a VM role attach a storage blob for writing and reading files, and can a worker role attach to the same storage blob to read and write files as well?
Sample:
A client pushes an XML file to the VM via FTP. The VM writes the XML file to storage. The worker role reads the file, processes it, and writes the contents to the DB.
Is my thinking correct or am I missing the boat?
Thanks
Given that Azure has an array of services, you have a few options. One important thing to keep in mind with Azure is that your worker roles, which are simply Windows Server 2008 without IIS installed, are very flexible, so there is a lot you can do with them – this includes writing your own FTP server and hosting it in worker role VMs. The FTP to Azure Blob Storage Bridge (on CodePlex) solution is an example of this.
In addition, you could use a web role (which is the same as a worker role but with IIS enabled) to do the same - so rather than rolling your own FTP server you can use IIS. A visual guide to setting IIS up to run as an FTP server in Azure can be found on ITQ.
I'd recommend doing some further reading to determine which is the better option of the two. Also have a think about your requirements, as these may influence your approach, e.g. scaling, bandwidth, costs, your preferred deployment model, etc.
As far as storing the files goes you can certainly use Blob Storage. If you have no need for a relational database in your system then you could skip using SQL Azure altogether (in which case the web role solution referenced above won’t be of much use) – but again that comes down to your particular requirements.
The official Windows Azure website is a good source of knowledge, especially if you’re getting started, so do take the time to look through some of the pertinent documentation.
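To illustrate the worker-role side of the flow described in the question (the FTP server drops XML into blob storage, the worker picks it up and writes to the database), here is a rough sketch using the current Azure.Storage.Blobs SDK; the container name, polling interval, and SaveToDatabase helper are placeholders:

```csharp
// Rough sketch of the worker-role side: poll the container the FTP server writes into,
// parse each XML blob, hand it to the database layer, then delete the blob.
using System;
using System.IO;
using System.Threading;
using System.Xml.Linq;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class IncomingXmlWorker
{
    static void Main()
    {
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "incoming");

        while (true)
        {
            foreach (BlobItem blob in container.GetBlobs())
            {
                BlobClient client = container.GetBlobClient(blob.Name);
                using (Stream stream = client.OpenRead())
                {
                    XDocument doc = XDocument.Load(stream);
                    SaveToDatabase(doc); // your persistence code
                }
                client.Delete(); // remove so it isn't reprocessed
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void SaveToDatabase(XDocument doc) { /* write the parsed contents to the DB */ }
}
```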

Migrating executables that require I/O API to azure

I have a number of executables that I want to migrate to Azure. These executables require the I/O API in order to function properly, so to be able to migrate them I have to place them inside a cloud drive (VHD). But as far as I know a VHD can be mounted by only one role at a time, so how can I arrange for, let's say, 2 or more roles to mount the same VHD?
My first thought is to upload copies of the original VHD so each role can mount its own VHD and read/write to it, and then save the files that all roles need to see in blob storage.
Alternatively, I could make the VHD sharable as described in "Using SMB to Share a Windows Azure Drive".
Which of the two solutions is better, and is there another way I can deal with this issue?
Here's a link to an open source project that bridges System.IO calls to an ASP.NET cloud storage provider.
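As a variation on the first option, each role instance can stage the files it needs from blob storage onto its own local disk, run the unmodified executable against local paths, and push the results back, so no VHD has to be shared at all. A rough sketch, where the paths, names, and executable are placeholders:

```csharp
// Each role instance stages its inputs locally, runs the legacy executable, and
// publishes results back to blob storage; no shared VHD required.
using System;
using System.Diagnostics;
using System.IO;
using Azure.Storage.Blobs;

class LocalStagingRunner
{
    static void Main()
    {
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "job-data");

        string workDir = Path.Combine(Path.GetTempPath(), "job");
        Directory.CreateDirectory(workDir);

        // Stage the input locally so the executable sees ordinary files.
        string inputPath = Path.Combine(workDir, "input.dat");
        container.GetBlobClient("input.dat").DownloadTo(inputPath);

        // Run the unmodified executable against the local copy.
        var psi = new ProcessStartInfo("legacy.exe", $"\"{inputPath}\"") { UseShellExecute = false };
        Process.Start(psi)?.WaitForExit();

        // Publish the result where every other role instance can read it.
        string outputPath = Path.Combine(workDir, "output.dat");
        using (FileStream fs = File.OpenRead(outputPath))
        {
            container.GetBlobClient("output.dat").Upload(fs, overwrite: true);
        }
    }
}
```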

How to write to a tmp/temp directory in Windows Azure website

How would I write to a tmp/temp directory in a Windows Azure website? I can write to a blob, but I'm using an npm package that requires me to give it file names so that it can write directly to those file names.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way that traditional apps will be familiar with. However, it's not persistent, so anything you put there that must survive the VM instance being repaved needs to be copied to Blob storage. Also, this storage is specific to a given role instance, so if you have two instances of the same role, they each have their own local storage buckets.
Alternatively, you can use Azure Drive, which allows you to keep the information persisted, but still doesn't allow multiple parallel writes.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
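In .NET terms, the "write locally, then persist to blob" pattern looks roughly like this; the temp path, container, and blob names are placeholders, and in a Cloud Service you would normally resolve the path from a configured Local Storage resource rather than Path.GetTempPath():

```csharp
// "Write locally, then persist to blob" in .NET terms.
using System;
using System.IO;
using Azure.Storage.Blobs;

class TempThenBlob
{
    static void Main()
    {
        // 1. Let the library write to an ordinary local file.
        string tempFile = Path.Combine(Path.GetTempPath(), "report.pdf");
        File.WriteAllBytes(tempFile, GenerateReport());

        // 2. Copy it to blob storage so it survives the instance being repaved.
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "reports");
        container.CreateIfNotExists();
        using (FileStream fs = File.OpenRead(tempFile))
        {
            container.GetBlobClient("report.pdf").Upload(fs, overwrite: true);
        }
    }

    static byte[] GenerateReport() => new byte[] { 1, 2, 3 }; // stand-in for real output
}
```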
This is just my understanding, so please correct me if anything is wrong.
In a Windows Azure Web Site, the content of your website is stored in blob storage and mounted as a drive, which is shared by all the instances your web site is using. Since it's in blob storage, it's persistent. So if you need the local file system, I think you can use the folders under your web site root path, but I don't think you can use the system tmp or temp folder.

Is it possible to mount blob storage to my local machine for deployment?

I have a build script, and it would be very useful to configure it to dump some files into Azure blob storage so they can be picked up by my Azure web role.
My preferred plan was to find some way of mounting the blob storage on my build server as a mapped drive and simply using Robocopy to copy the files over. This would involve the least amount of friction, as I already deploy some files like this to other web servers using WebDrive.
I found a piece of software that will allow me to do that: http://www.gladinet.com/
However on further investigation I found that it needs port 80 to run without some hairy looking hacking about on the server.
So is there another piece of software I could use or perhaps another way I haven't considered, such as deploying the files to a local folder that is automagically synced with blob storage?
Update in response to @David Makogon
I am using http://waacceleratorumbraco.codeplex.com/ which performs two-way synchronisation between blob storage and the web roles. I have tested this with http://cloudberrylab.com/ and I can deploy files manually to the blob and they are deployed correctly to the web roles. I have also done the reverse: updated files in the web roles, which were then synced back to the blob, and I have subsequently edited/downloaded them from blob storage.
What I'm really looking for is a way to automate the cloudberry side of things. So I don't have a manual step to copy a few files over. I will investigate the Powershell solutions in the meantime.
I know this is an old post - but in case someone else comes here... the answer is now "yes". I've been working on a CodePlex project to do exactly that. (All source code is available).
http://azuredrive.codeplex.com/
If you're comfortable using PowerShell in your build process then you could use the Cerebrata Cmdlets to upload the files. If that doesn't work for you, you could write a custom activity (but this sounds quite a bit more involved).
Mounting a cloud drive from a non-Windows Azure compute instance (e.g. your local build machine) is not supported.
Having said that: Even if you could mount a Cloud Drive from your build machine, your compute instances would need access to it too, and there can only be one writer. If your compute instances only needed read-only access, they'd need to create a snapshot after you upload new files.
This really doesn't sound like a good idea though. As knightpfhor suggested, the Cerebrata cmdlets provide this capability (look at Import-File). This allows you to push individual files into their own blobs. You can optimize further by pushing a single ZIP file into a blob. You can then use a technique similar to the one described by Nate Totten in his multi-tenant web role sample, to detect new zip files and expand them to your local storage. Nate's blog post is here.
Oh, and if you don't want to use the Cerebrata cmdlets, you can upload blobs directly with the Windows Azure Storage REST API (though the cmdlets are very simple to use and integrate seamlessly with PowerShell).
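For instance, the upload step can also be done by a small console tool invoked from the build, using the storage client library (which wraps the same REST API). The directory, container, and connection-string names below are placeholders, and Path.GetRelativePath assumes .NET Core / .NET 5+:

```csharp
// Console tool the build can call to push an output directory to a blob container.
using System;
using System.IO;
using Azure.Storage.Blobs;

class PushBuildOutput
{
    static void Main(string[] args)
    {
        string sourceDir = args.Length > 0 ? args[0] : "deploy";
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "site-content");
        container.CreateIfNotExists();

        foreach (string path in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
        {
            // Preserve the relative folder structure in the blob name.
            string blobName = Path.GetRelativePath(sourceDir, path).Replace('\\', '/');
            using (FileStream fs = File.OpenRead(path))
            {
                container.GetBlobClient(blobName).Upload(fs, overwrite: true);
            }
            Console.WriteLine($"Uploaded {blobName}");
        }
    }
}
```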
