How do I manage Azure "files" from the GUI and command line?

I'm confused about the difference between "files" and other objects in Azure storage. I understand how to upload a file to a share using the Azure web console and the command line, but in Azure Storage Explorer I don't see either of these; I only see "blobs". Although I can upload "files" there using the explorer, I can't upload to, or even see, any of my "file" "shares".
Is there a way to browse and manage "files" and "shares" using Azure Storage Explorer, or some other client or CLI tool (on OS X)?

They are different services. Azure Storage is the "umbrella" service that comprises several sub-services: Queues (obvious :)), Tables (a kind of NoSQL table storage), Blobs (binary large objects, from text files to multimedia), and Files (the service that implements SMB file shares which can be mounted by, for example, a virtual machine).
Which one you need depends on what you want to implement. If you just need to store files, blobs may be enough. If you need to attach the storage to a VM as a file share, then the Files service is what you want. Good comparison.
I am not sure whether you can manage Files with the Azure Storage Explorer (update: checked, you cannot), but something like CloudXplorer is able to do that.

You can browse and add/edit/delete files in an Azure file share much as you would on any other file share, once it is mounted. These two articles explain how to mount one (a minimal Linux sketch follows the list):
Mount Azure File Share in Windows
Mount Azure File Share in Linux
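For a rough idea of what the Linux route looks like, here is a minimal sketch of an SMB 3.0 mount (requires cifs-utils; the account name mystorageacct, share name myshare, and mount point are placeholders):

```
# Mount an Azure file share over SMB 3.0 on Linux (requires cifs-utils).
# "mystorageacct" and "myshare" are placeholder names; supply your own
# account name, share name, and storage account key.
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
    -o vers=3.0,username=mystorageacct,password="$STORAGE_KEY",dir_mode=0777,file_mode=0777,serverino
```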
Alternatively, you can use the Azure CLI or PowerShell; see the linked examples below, plus the rough sketch after them:
PowerShell example
CLI example
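As a minimal sketch of the CLI route (the storage account mystorageacct, share myshare, and file name are placeholders, and authentication flags may vary with your setup):

```
# Create a file share, upload a file, then list the share's contents.
# "mystorageacct", "myshare", and "report.pdf" are placeholder names.
az storage share create --name myshare --account-name mystorageacct
az storage file upload --share-name myshare --source ./report.pdf \
    --account-name mystorageacct
az storage file list --share-name myshare --account-name mystorageacct \
    --output table
```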

Related

Doubts about designing an Azure web app file manager

I am designing a web application that needs to run as an Azure web app. The app is focused on managing files, so it needs to let users upload files and store them.
Since it is a cloud app, I suppose I am not able to create a directory in the web app service. My question is whether I should take advantage of Azure and create a Storage Account, and if that is the solution, which storage option is best, Blob or File?
Thank you in advance. Best wishes.
A container belongs to Blob Storage, which is a great option for programmatic storage, where your application reads from and writes to the storage account.
Blob containers can hold any binary files (binary large objects); there is no real ordering or hierarchy, only a virtual folder structure.
Containers are usually accessed programmatically, with sharing controlled through a Shared Access Signature (SAS) and an access policy, so they work well if you don't want websites and the public to access the files directly.
Regarding "I suppose that I am not able to create a directory in the web app service": Azure Files is more useful for mounting a file share onto a server, and multiple servers can mount the same file share. A share can also have a quota.
A file share has a real directory structure: you can create directories and subdirectories hierarchically, which containers cannot do. The Connect option on the file share gives you details on how to mount it on a Windows or Linux machine.
In short: use File storage if you need the shared-drive (SMB) protocol; if not, design the application around Blob storage. As per your requirement, if you want to create directories, choose an Azure file share (a minimal CLI sketch follows).
Reference: the Azure Blob and File share storage link mentioned by @deherman-MSFT.
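To illustrate the directory point above, here is a minimal sketch using the Azure CLI; all names below are placeholders:

```
# Create directories and subdirectories in an Azure file share.
# "mystorageacct", "myshare", and the paths are placeholder names.
az storage directory create --share-name myshare --name docs \
    --account-name mystorageacct
az storage directory create --share-name myshare --name docs/invoices \
    --account-name mystorageacct
az storage file upload --share-name myshare --path docs/invoices/jan.pdf \
    --source ./jan.pdf --account-name mystorageacct
```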

Moving locally stored documents to Azure

I want to spike whether Azure and the cloud are a good fit for us.
We have a website where users upload documents to our currently hosted site.
Every document has an equivalent record in a database.
I am using Terraform to create the Azure infrastructure.
What is the best way of migrating the documents from the local file path on the server to Azure?
Should I be using File storage or Blob storage? I am confused about the difference.
Is there anything in Terraform that can help with this?
Based on your comments, I would recommend storing them in Blob Storage. This service is suited for storing and serving unstructured data like files and images. There are many other features like redundancy, archiving etc. that you may find useful in your scenario.
File Storage is more suitable in Lift-and-Shift kind of scenarios where you're moving an on-prem application to the cloud and the application writes data to either local or network attached disk.
You may also find this article useful: https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
UPDATE
Regarding uploading files from a local computer to Azure Storage, there are actually many options available:
Use a GUI tool like Microsoft's Azure Storage Explorer.
Use the AzCopy command-line tool (see the sketch after this list).
Use the Azure PowerShell cmdlets.
Use the Azure CLI.
Write your own code using any of the available storage client libraries, or consume the REST API directly.
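To make the AzCopy option concrete, here is a minimal sketch using AzCopy v10 syntax; the account name, container name, SAS token, and local path are all placeholders:

```
# Recursively copy a local documents folder into a blob container.
# The account, container, SAS token, and source path are placeholders.
azcopy copy "/var/www/uploads" \
    "https://mystorageacct.blob.core.windows.net/documents?<SAS-token>" \
    --recursive
```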

Azure storage sync mechanisms

I have a problem that I have been racking my brain over, and I figured I could use some perspective and insight from people who are a lot more knowledgeable about this.
What I have currently: a web-based application hosted in Azure uses Azure Blob storage to store files that are generated as part of data import processes. We have a separate application that extends the original web application and allows users to upload files; these files are currently also stored in Blob storage.
Where I am trying to go: I have a requirement to map network file shares on users' laptops so they can access the files that currently reside in the blob store.
Since Azure Blob storage does not support SMB, I have no way of actually doing this with a blob store.
I could use Azure Files in conjunction with a file server running the sync agent. However, this requires a lot of work in terms of refactoring and setup, plus a custom service that adds and removes permissions on the file server.
I'm wondering if there is a service or piece of software on the market that would let me keep using blobs and sync the blob files into a file server, so users can access and open the files in Windows File Explorer. I found one open-source project, but it only does a one-way sync from the blob store to the file share. Ideally I'd like a solution that syncs two ways, like Azure File Sync does.
Any thoughts and ideas will be appreciated.
Since the maximum number of blob containers and file shares is unlimited, per my understanding you could take either of the following approaches:
Migrate the data from Blob storage to an Azure file share, and use Azure Files for all subsequent file storage.
Note: currently you must specify the storage account key when mounting file shares (for details, follow this feedback item), so I would recommend against mapping the file share directly on users' laptops.
Alternatively, keep using Blob storage: create one blob container per user and generate a SAS token for each container. Users can then manage their blob files with Azure Storage Explorer, or use AzCopy and other command-line tools to download the blobs to their laptop file systems.
Note: for security, combine a stored access policy with the SAS; to revoke permissions you then only need to invalidate the related access policy instead of regenerating the account key. For details, see Controlling a SAS with a stored access policy and Shared Access Signatures, Part 2: Create and use a SAS with Blob storage.
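A minimal sketch of that stored-access-policy pattern with the Azure CLI (all names, keys, and dates below are placeholders):

```
# Create a stored access policy on a per-user container, then issue a SAS
# bound to it. "user1-files", "user1-read", and the account are placeholders.
az storage container policy create --container-name user1-files \
    --name user1-read --permissions rl --expiry 2025-12-31 \
    --account-name mystorageacct --account-key "$STORAGE_KEY"

# Generate a SAS token that references the policy (no inline permissions).
az storage container generate-sas --name user1-files \
    --policy-name user1-read \
    --account-name mystorageacct --account-key "$STORAGE_KEY"

# To revoke access later, delete the policy; every SAS issued against it
# becomes invalid, with no need to regenerate the account key.
az storage container policy delete --container-name user1-files \
    --name user1-read --account-name mystorageacct --account-key "$STORAGE_KEY"
```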

Dynamic file hosting on Azure

I am using Windows Azure for a custom blog implementation. The blog uses CKEditor and the CKFinder file management plugin. Typically the file management plugin stores files in a file-system directory. I need to store these files as if they were in a local directory and serve them over HTTP. In Azure you cannot rely on the local file system surviving instance recycles.
I assume I am supposed to use Azure Storage, but I am at a loss as to how to do this. Is there a way to "mount" this storage to the file system? Am I correct in assuming I should use Storage? If not, any guidance as to what I am missing?
Thanks
Alternatively, you could use AzureBlobDrive to mount blob storage as a drive in Azure directly (no VHD, and no limitation of only one instance being able to write).
https://github.com/richorama/AzureBlobDrive
You can actually mount a page blob as an NTFS drive, which gives you a "durable drive" (persisted just like any other blob) that you access via a drive letter, just as you would a locally attached (but volatile) disk.
The catch is that a mounted drive may have only one writer, which can cause challenges when scaling to multiple instances.
Take a look at this MSDN post to see an example of mounting a drive. Notice that, while the example doesn't set up any cache, you can specify a cache size. The cache is stored on a local disk resource.
EDIT: For a tutorial, download the Windows Azure Training Kit. Go to hands-on labs, and open Exploring Windows Azure Storage. Check out Exercise 4: Working with Drives.

Is it possible to mount blob storage to my local machine for deployment?

I have a build script that it would be very useful to configure to dump some files into Azure blob storage so they can be picked up by my Azure web role.
My preferred plan was to find some way of mounting the blob storage on my build server as a mapped drive and simply using Robocopy to copy the files over. This would involve the least amount of friction, as I already deploy some files this way to other web servers using WebDrive.
I found a piece of software that will allow me to do that: http://www.gladinet.com/
However, on further investigation I found that it needs port 80 to run, short of some hairy-looking hacking about on the server.
So is there another piece of software I could use or perhaps another way I haven't considered, such as deploying the files to a local folder that is automagically synced with blob storage?
Update in response to @David Makogon
I am using http://waacceleratorumbraco.codeplex.com/ which performs two-way synchronisation between blob storage and the web roles. I have tested this with http://cloudberrylab.com/ : I can deploy files manually to the blob store and they are deployed correctly to the web roles. I have also done the reverse, updating files in the web roles, which were then synced back to the blob store, and I have subsequently edited/downloaded them from blob storage.
What I'm really looking for is a way to automate the cloudberry side of things. So I don't have a manual step to copy a few files over. I will investigate the Powershell solutions in the meantime.
I know this is an old post - but in case someone else comes here... the answer is now "yes". I've been working on a CodePlex project to do exactly that. (All source code is available).
http://azuredrive.codeplex.com/
If you're comfortable using PowerShell in your build process, you could use the Cerebrata cmdlets to upload the files. If that doesn't work for you, you could write a custom activity (but this sounds quite a bit more involved).
Mounting a cloud drive from a non-Windows Azure compute instance (e.g. your local build machine) is not supported.
Having said that: Even if you could mount a Cloud Drive from your build machine, your compute instances would need access to it too, and there can only be one writer. If your compute instances only needed read-only access, they'd need to create a snapshot after you upload new files.
This really doesn't sound like a good idea though. As knightpfhor suggested, the Cerebrata cmdlets provide this capability (look at Import-File). This allows you to push individual files into their own blobs. You can optimize further by pushing a single ZIP file into a blob. You can then use a technique similar to the one described by Nate Totten in his multi-tenant web role sample, to detect new zip files and expand them to your local storage. Nate's blog post is here.
Oh, and if you don't want to use the Cerebrata cmdlets, you can upload blobs directly with the Windows Azure Storage REST API (though the cmdlets are very simple to use and integrate seamlessly with PowerShell).
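For anyone reading today, here is a minimal sketch of the "single ZIP file into a blob" idea using the current Azure CLI rather than the Cerebrata cmdlets; the account name, container, and paths are placeholders:

```
# Zip the build output and push it into a single blob.
# "mystorageacct", "deployments", and the paths are placeholder names.
zip -r site.zip ./build-output
az storage blob upload --container-name deployments --name site.zip \
    --file site.zip --account-name mystorageacct --account-key "$STORAGE_KEY"
```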
