Azure file share not persisting - azure

I've got a storage account in Azure that I made a network drive on my PC. It seems as if the drive I configure does not maintain connectivity for very long.
It feels as if every time I reboot, I have to remove the drive and reconnect to Azure.
Is there some configuration I'm missing for this?

If you use the Azure file share on Windows, you can persist the mount by following the document Persisting Azure file share credentials in Windows. On Linux, follow the document Create a persistent mount point for the Azure file share with /etc/fstab.
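As a rough sketch of what those documents do (the drive letter, share name, and storage account name below are placeholders, not values from the question): on Windows, store the credentials once with cmdkey so they survive reboots, then map the drive as persistent:

    cmdkey /add:<storage-account>.file.core.windows.net /user:AZURE\<storage-account> /pass:<storage-account-key>
    net use Z: \\<storage-account>.file.core.windows.net\<share-name> /persistent:yes

On Linux, an /etc/fstab entry along these lines remounts the share at boot (the credentials file holds the account name and key; the mount options shown are typical examples, adjust them to your needs):

    //<storage-account>.file.core.windows.net/<share-name> /mnt/<share-name> cifs nofail,credentials=/etc/smbcredentials/<storage-account>.cred,dir_mode=0777,file_mode=0777,serverino 0 0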

Related

AWS NFS mount needs to be moved to Azure

Have you already read this: mount -t nfs vs cifs? :(
Our requirement is that we have an application hosted in AWS that uses nfs-utils to mount an EFS volume. My question is how this can be done in Azure. I know they have Azure Files, which works in quite a similar way to EFS, but as per the Azure documentation it is mounted only through cifs-utils. The point is: although it will mount an Azure file share, will it work without any issue, or do we need to change something in our commands to make it happen?
I am not good at Linux, so please pardon me if I sound totally stupid.
"Our requirement is that we have an application hosted in AWS that uses nfs-utils to mount an EFS volume. My question is how this can be done in Azure."
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.
If you want to do the same thing in Azure, I think you are talking about an Azure storage blob (a new disk).
In Azure, we can use the Azure portal to add a new disk to an Azure VM as a data disk, which works like adding a physical data disk to a host. Then we can use fdisk to create the file system on the new partition.
We can follow this article to attach a new disk to an Azure VM via the Azure portal.
After that is completed, we can follow this article to initialize the new data disk in Linux.
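As a rough sketch of what that initialization article walks through on the Linux side (the device name /dev/sdc and mount point /datadrive are illustrative assumptions, not fixed values):

    # partition the new disk, put a file system on it, and mount it
    sudo fdisk /dev/sdc                # create a new primary partition, e.g. /dev/sdc1
    sudo mkfs -t ext4 /dev/sdc1        # format the new partition
    sudo mkdir /datadrive
    sudo mount /dev/sdc1 /datadrive    # add an /etc/fstab entry (ideally by UUID) to make the mount permanent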
"I know they have Azure Files, which works in quite a similar way to EFS, but as per the Azure documentation it is mounted only through cifs-utils."
You are right: an Azure file share works much like EFS, but Azure Files uses the Server Message Block (SMB) protocol (also known as the Common Internet File System, or CIFS).
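So instead of mount -t nfs you mount with the cifs type. A minimal sketch, assuming the cifs-utils package is installed and using placeholder account, key, and share names (outbound port 445 to the storage endpoint must be open):

    sudo mkdir -p /mnt/<share-name>
    sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/<share-name> \
        -o vers=3.0,username=<storage-account>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino

Your application then reads and writes /mnt/<share-name> just as it did the EFS mount.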
The maximum size of an Azure file share is 5 TiB, there is a quota of 20,000 open handles on a single file, and the maximum IOPS per share is 1,000.
We can create a data disk from an Azure storage blob; the maximum size of a data disk is 4 TiB (and we can attach multiple data disks to the VM), while an OS disk is 2 TiB.
AWS EFS supports the Network File System versions 4.0 and 4.1 (NFSv4) protocol.
Here is an article about the performance of Azure file shares and Azure storage blobs.

Azure shared storage for application

Working in an IaaS environment in Azure and need to create a shared file store for applications that will be sharing the same files uploaded by end users. The file share needs to be seen on various servers and appear as a fixed drive letter or mount point. I have already created a storage account and a file share in Azure but cannot overcome the issue that the mapped drive is associated with a user's profile.
Was wondering if anyone has come up with a solution. ... I'm the system administrator assigned to this task and can do things in PowerShell or pass code information to developers for their review.
Did not resolve the issue; the developers are going to use Blob storage instead.
The trick with this was getting the application to see the drive letter. For us, having a local user running as a service with the associated Azure file share mapping might have worked.
NOTE: to map the Azure drive, a user would need the Azure storage account name and the key generated for that account to access it.
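A hedged sketch of that idea in PowerShell (the account, key, share, and drive letter are placeholders): run the mapping under the same local account the application service runs as, so the drive lives in that account's profile rather than in an interactive user's.

    # run as the service account, e.g. from that account's startup script
    $key  = ConvertTo-SecureString "<storage-account-key>" -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential ("AZURE\<storage-account>", $key)
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account>.file.core.windows.net\<share-name>" -Credential $cred -Persist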

How to access VM storage from a webjob?

Because Azure has no native FTP capabilities, I created a small VM where clients can drop files. I have a separate webjob that will process those files, but I can't figure out how to get access to the files from the webjob. I first created a blob thinking I could attach it to the VM, but that doesn't seem to be possible. How is this done?
You can't access the C drive of another VM directly. Maybe you should use Azure Files, which is an SMB file share? No need for FTP. You can access that share from anywhere and it should be quite easy to use.
For small, temporary use the cost should be near zero.
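Creating the share itself is a one-liner with the Azure CLI (names are placeholders; authentication via an account key or connection string is assumed), and the webjob can then reach the same files through the Azure Storage SDK or the REST interface instead of going through the VM's local disk:

    az storage share create --name <share-name> --account-name <storage-account> --account-key <storage-account-key>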

Shared drive between Azure Virtual Machines

I have just moved my web site to an Azure Virtual Machine and have been up and running since last weekend. So far I'm very happy with the results and looking forward to taking advantage of Azure further in due course.
I do have what would seem to be a pretty common scenario - and, to my surprise, I can't find an obvious solution. I have a couple of VMs - one is my primary server and the other will be suspended and ready to kick in (manually is fine) if the first one has an issue. I back up my web site to Azure Storage (my backup utility supports saving to an Azure blob). That's the good news.
I had assumed that I could somehow mount the storage blob as a drive, therefore effectively having shared storage across the two VMs. However, to my surprise, I haven't found an obvious way to do that. I have found a third party utility (Gladinet Cloud Desktop) but it seems painfully slow. As I say, I admit I just assumed this would be an easy thing to do.
So, stepping back, what is the most straightforward way to access a storage blob from multiple VMs? I really don't want to set up a private network and then set up network file sharing - that seems so old school :) and places a dependency on one specific VM.
Any suggestions?
Thanks.
This is now not just possible, but very easy, and it looks just like a filesystem. Check out the new Azure File Service (in preview as of this writing).
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
Quoting from the announcement:
"The Azure File service exposes file shares using the standard SMB 2.1 protocol. Applications running in Azure can now easily share files between VMs using standard and familiar file system APIs like ReadFile and WriteFile."
It is better than just an SMB drive, as the announcement goes on to mention:
"In addition, the files can also be accessed at the same time via a REST interface, which opens a variety of hybrid scenarios. Finally, Azure Files is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our platform."
In an Azure Resource Manager storage account you can create a network file share that can be mounted as a drive on multiple VMs, or on computers and devices not on Azure, for Unix, Linux, and Windows.
In general, go to your storage account ➡ Files ➡ Create file share ➡ name the share and set the disk space quota ➡ click Connect to obtain the command for Windows or Linux to mount the share on the respective devices. Note this ONLY WORKS for locally redundant storage, not zone-redundant, not geo-redundant.
https://www.youtube.com/watch?v=SGPJZMaSlis
The video tutorial above shows you step by step how to do this. The only restriction is OS support for the SMB 3.0 protocol, which Windows 8 or above and Windows Server 2012 or above provide. It requires firewall port 445 to be open.
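If a mount fails, a quick way to confirm that port 445 is actually reachable from the client is a PowerShell connection test (the host name is a placeholder):

    Test-NetConnection -ComputerName <storage-account>.file.core.windows.net -Port 445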
You can access blobs from multiple VMs. This is a very common pattern. What you can't do is mount a drive (stored in a blob) on multiple VMs simultaneously. That is, if you decide to create a VHD disk and attach it to a VM (whether Linux or Windows - doesn't matter), then the blob-backed disk is locked to a VM and that VM can then work with the vhd like it would a local file system.
If, on the other hand, you deal with blobs discretely as single objects, you can easily work with these blobs across any number of VMs.
If you're looking to do something like network sharing (e.g. SMB), you'd either need to use the Azure File Service or stage your own SMB server VM.
In the case where you absolutely must have a mounted file system, yet want to use the file system in a primary/backup fashion, you could always do something via the API to unmount from one VM and remount to another VM. This can be executed via PowerShell (Windows only) or via the cross-platform command-line interface on Linux/Mac/Windows. You'd do this if your primary VM failed for some reason.
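As a sketch of that detach/reattach idea with the current cross-platform CLI and managed disks (the resource group, VM, and disk names are placeholders, and these commands postdate the original question):

    # detach the data disk from the failed primary VM
    az vm disk detach --resource-group <resource-group> --vm-name <primary-vm> --name <data-disk>
    # attach the same disk to the standby VM
    az vm disk attach --resource-group <resource-group> --vm-name <standby-vm> --name <data-disk>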
These are good articles; I am also looking for that and hope to find the right solution.
I hope you share your experience here with your choice.
Deciding when to use Azure Blobs, Azure Files, or Azure Disks
https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
There are premium disks:
https://azure.microsoft.com/en-us/pricing/details/managed-disks/
Manually create and use a volume with Azure disks in Azure Kubernetes Service (AKS)
https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
Note: An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use Azure Files.
Performance guidelines for SQL Server in Azure Virtual Machines
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performance
Deploy a SQL Server container in Kubernetes with Azure Kubernetes Services (AKS)
https://learn.microsoft.com/en-us/sql/linux/tutorial-sql-server-containers-kubernetes?view=sql-server-2017

Dynamic file hosting on Azure

I am using Windows Azure for a custom blog implementation. The blog uses CKEditor and the CKFinder file management plugin. Typically the file management plugin connects to a file system directory to store the files. I need to store these as if it were a local directory and serve them through HTTP requests. In Azure you cannot rely on the file system to persist through recycles.
I assume I am supposed to use Azure Storage, but am at a loss as to how to do this. Is there a way to "mount" these storage systems to the file system? Am I correct in my assumption to use Storage? If not, any guidance as to what I am missing?
Thanks
Or, you could use AzureBlobDrive to mount blob storage as a drive in Azure directly (no VHD, no limitation on only one instance being able to write).
https://github.com/richorama/AzureBlobDrive
You can actually mount a page blob as an NTFS drive, which is then a "durable drive" (just like any other blob), and you access it via a drive letter, just like a locally-attached (but volatile) drive.
The issue is that, using mounted drives, you may only have one writer, so this might cause challenges when scaling to multiple instances.
Take a look at this MSDN post to see an example of mounting a drive. Notice that, while the example doesn't set up any cache, you can specify a cache size. The cache is stored on a local disk resource.
EDIT: For a tutorial, download the Windows Azure Training Kit. Go to hands-on labs, and open Exploring Windows Azure Storage. Check out Exercise 4: Working with Drives.
