Azure Cloud Shell File Share Contains 5GB IMG File

I created a PowerShell Cloud Shell in the Azure portal, configured to use an existing general-purpose v2 storage account, and created a new file share and gave it a name. When I look inside the file share, I can see a folder ".cloudconsole" with one file inside, "acc_[name].img". The size of the file is 5 GB.
Question:
What is this ".img" file for?
Will there be a cost associated with having this file in the storage account?

Cloud Shell needs an Azure file share to act as the clouddrive that stores your files, so it asks you to create (or select) a storage account when you first use Cloud Shell.
The ".img" file is a disk image that Cloud Shell uses to persist your home directory between sessions. The image itself is free; you only pay the normal charges for the storage account. You can get more details here.

The previous answer has the link explaining what the .img file is; standard storage account rates do apply. The Azure Pricing Calculator can give you current pricing, but at the time I'm writing this, 5 GB of hot storage is about $0.10/month, plus read/write transaction charges depending on how often you run Cloud Shell.

Related

Auto-generated Azure cloud-shell-storage account

I have an Azure Pay-As-You-Go subscription with an Azure Storage general-purpose v1 account where I store files. Yesterday I was surprised to find another storage account, in a different location, which I hadn't created. A screenshot is given for details:
If you have any knowledge about it or have faced the same behavior with Azure Storage, please guide me and share your experience. I want to know what it is for and why it has been created in a different location, as my other services are in a different resource group in the North Europe location.
When you use Azure Cloud Shell for the first time, it prompts you to associate a new or existing file share to persist files across sessions.
When you use basic settings and select only a subscription, Cloud Shell creates three resources on your behalf in the supported region that's nearest to you. The auto-generated storage account is always named cs<uniqueGuid>; read here.
Also, Azure creates a disk image of your $Home directory to persist all contents within the directory. The disk image is saved in your specified file share as acc_<User>.img
at fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img, and it automatically syncs changes.
As for the region, it depends on the region you chose when you selected the associated Azure storage account on first starting Cloud Shell. Associated Azure storage accounts must reside in the same region as the Cloud Shell machine that you're mounting them to, so its region is not related to your other Azure resource groups at all. You can also run clouddrive unmount to re-select an associated storage account for the Azure file share.
To find your current region you may run env in Bash and locate the variable ACC_LOCATION.
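From a PowerShell Cloud Shell session, a minimal sketch of both steps (ACC_LOCATION is documented for the Bash experience, but the same environment variable is visible to PowerShell, and the clouddrive command is available in both experiences):

    # Run inside an Azure Cloud Shell (PowerShell) session.
    $env:ACC_LOCATION    # region of the Cloud Shell machine you are currently on, e.g. "westeurope"

    # Detach the currently associated file share; the next time Cloud Shell starts
    # it will prompt you to select (or create) a storage account again.
    clouddrive unmount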

move file from azure vm to azure file storage

I have an Azure VM (Windows Server 2016) with a folder into which a file arrives every 5 minutes.
Now I want to create a Windows Service which will run on the Azure VM and, if any file exists, move it to Azure File storage.
Could someone guide me on what needs to be done, or suggest any other approach?
As I see it, you have 2 options:
Mount the File Storage share as a network drive. Once you mount the share as a network drive (and get a drive letter), you can simply use the System.IO namespace to perform IO operations; a PowerShell sketch of this approach follows this list. Please see this link for more details: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows.
Use Microsoft's Azure Storage SDK, which is a wrapper over the Azure Storage REST API, and upload files from the local folder to the share in Azure File Storage. Please note that once a file is uploaded to Azure File Storage, you would need to manually delete it from your server to achieve a move operation.
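A minimal PowerShell sketch of option 1, assuming a storage account named mystorageacct, a share named myshare and a local drop folder C:\IncomingFiles (all placeholder names), and that outbound port 445 is open from the VM:

    # Store the share credentials so the connection survives reboots.
    cmdkey /add:mystorageacct.file.core.windows.net /user:AZURE\mystorageacct /pass:"<storage-account-key>"

    # Map the share to drive Z: (-Persist keeps the mapping across sessions).
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageacct.file.core.windows.net\myshare" -Persist

    # Move every file from the local drop folder onto the share; Move-Item removes
    # the local copy, which gives you the "move" behavior the question asks for.
    Get-ChildItem -Path "C:\IncomingFiles" -File | Move-Item -Destination "Z:\"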

Azure storage sync mechanisms

I have a problem that I have been wracking my brain about and figured I would need some perspective and insight from people who are a lot more knowledgeable about this.
What I have currently: a web-based application hosted in Azure uses Azure blob storage to store files that are generated as part of data import processes. We have a separate application that extends the original web application and allows users to upload files; these files are currently also stored in Azure blob storage.
Where I am trying to go: I have a requirement for the ability to map network file shares on a user's laptop and access the files that currently reside in the blob store.
Since Azure Blob storage does not support SMB, I have no way of actually doing this with a blob store.
I could use Azure Files in conjunction with a file server running the sync agent. However, this requires a lot of work in terms of refactoring, setup, and some custom service that adds/removes permissions on the file server.
I'm wondering if there is a service or a piece of software on the market that allows me to continue using blob storage and sync the blob files into a file server, so that users can access and open the files using Windows File Explorer. I found one that looks like an open-source project, but it only does a one-way sync from the blob store to the file share. Ideally I'd like to find a solution that does a two-way sync like Azure File Sync does.
Any thoughts and ideas will be appreciated.
Since the maximum number of blob containers and file shares is unlimited, per my understanding you could leverage the following approaches:
Migrate the data from blob storage to an Azure file share, so that subsequent files are stored in Azure Files instead of blob storage.
Note: currently you must specify the storage account key when mounting file shares; for details you could follow this feedback. Because of that, I would recommend that you do not map the network file share on a user's laptop.
You could still use blob storage: create a blob container for each user and generate a container-level SAS token for each user, then the users could leverage Azure Storage Explorer to manage their blob files, or use AzCopy and other command-line tools to download the blob files onto their laptop's file system (see the sketch after these notes).
Note: for security, you could combine a stored access policy with the SAS; to revoke permissions you then only need to invalidate the related access policy instead of regenerating the account key. For details you could follow Controlling a SAS with a stored access policy and Shared Access Signatures, Part 2: Create and use a SAS with Blob storage.
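A minimal sketch of the per-user container approach using the Az.Storage PowerShell module (the account, container and policy names are placeholders):

    # Context for the storage account that holds the per-user containers.
    $ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<storage-account-key>"

    # One container per user.
    New-AzStorageContainer -Name "user1-files" -Context $ctx

    # Stored access policy, so access can later be revoked without regenerating the account key.
    New-AzStorageContainerStoredAccessPolicy -Container "user1-files" -Policy "user1-policy" `
        -Permission rwl -ExpiryTime (Get-Date).AddMonths(6) -Context $ctx

    # Container-level SAS token tied to that policy; hand this to the user for
    # Azure Storage Explorer or AzCopy.
    New-AzStorageContainerSASToken -Name "user1-files" -Policy "user1-policy" -Context $ctx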

How do I manage Azure "files" from GUI and command line?

I'm confused about the difference between "files" and other objects in Azure storage. I understand how to upload a file to a share using the Azure web console and the command line, but in the Azure Storage Explorer I don't see either of these; I only see "blobs", and though I can upload "files" there using the explorer, I can't upload to or even see any of my "file" "shares".
Is there a way to browse and manage "files" and "shares" using Azure Storage Explorer, or some other client or CLI tool (on OS X)?
These are different services. Azure Storage is the "umbrella" service that consists of several sub-services: Queues (obvious :)), Tables (a kind of NoSQL table storage), Blobs (binary large objects, from text files to multimedia) and Files (the service that implements file shares which can be connected to a Virtual Machine, for example, as a file share).
They are different services that may be used from the Azure Storage Explorer, but it depends on what you want to use and/or implement. If you just need to store files, you may use blobs. If you need to attach the storage to a VM as a file share, then the Files service is what you need. Good comparison.
I am not sure if you can manage Files with the Azure Storage Explorer (UPD: checked; you cannot), but something like CloudXplorer is able to do that.
You can browse and add/edit/delete files in Azure File Shares similar to how you would any other file share after mounting. You can refer to these two articles on how to do so:
Mount Azure File Share in Windows
Mount Azure File Share in Linux
Alternatively, you can use the CLI or PowerShell; see the examples below, plus a short Az.Storage sketch after them:
PowerShell example
CLI example
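If you prefer the command line on OS X, here is a minimal sketch with the Az.Storage PowerShell module (runs on PowerShell 7 for macOS; the account and share names are placeholders):

    # Context for the storage account that owns the share.
    $ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<storage-account-key>"

    # List what is in the share.
    Get-AzStorageFile -ShareName "myshare" -Context $ctx

    # Upload a local file to the share.
    Set-AzStorageFileContent -ShareName "myshare" -Source "./report.csv" -Path "report.csv" -Context $ctx

    # Download it back.
    Get-AzStorageFileContent -ShareName "myshare" -Path "report.csv" -Destination "./report-copy.csv" -Context $ctx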

How to write to a tmp/temp directory in Windows Azure website

How would I write to a tmp/temp directory in a Windows Azure website? I can write to a blob, but I'm using an NPM package that requires me to give it file names so that it can write directly to those files.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way that traditional apps will be familiar with. However, it's not persistent, so anything you put there that must survive the VM instance being repaved should be copied to Blob storage. Also, this storage is specific to a given role instance, so if you have two instances of the same role, they each have their own local storage buckets.
Alternatively, you can use Azure Drive, which allows you to keep the information persisted, but still doesn't allow multiple parallel writes.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
This is just my understanding, so please correct me if anything is wrong.
In Windows Azure Web Sites, the content of your website is stored in blob storage and mounted as a drive, which is shared by all instances your web site is running on, and since it's in blob storage it's persistent. So if you need the local file system, I think you can use the folders under your web site's root path, but I don't think you can use the system tmp or temp folder.
