How to emulate Azure File Storage on a local machine?

I want to integrate and test Azure File Storage from my web application. Before deploying it to the environment (where an Azure file share is provisioned), I would like to test it on my local machine.
I have a Docker container running Azurite locally, and I am able to emulate an Azure Blob Storage container on my machine, connect to it, and test.
I just want to be able to do the same for Azure File Storage, but I don't see support for it in Azurite or in the deprecated Azure Storage Emulator. Per the fifth point of the official Microsoft docs (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-emulator#differences-between-the-storage-emulator-and-azure-storage): "The File service and SMB protocol service endpoints aren't currently supported in the Storage Emulator."
Is there a way to emulate File Storage on Azurite? Or any other supporting application, docker image, etc.?

One option is to mount the file share directly into a running Docker container as a CIFS share. I tested this on the latest Ubuntu image pulled from Docker Hub. To authenticate to the file share, you'll need the storage account's name and access key, which can be found in the portal view for your particular storage account.
Pull the latest image and run it with the --privileged flag to avoid mount: <mount-path>: permission denied errors:
docker pull ubuntu:latest
docker run -it --entrypoint /bin/sh --privileged ubuntu:latest
Install the cifs-utils package in case it's missing:
apt update
apt install cifs-utils -y
In my example, the file share is named root, so I mount it at /mnt/root in the container.
STORAGE_ACCOUNT_NAME="<your_storage_account>"
ACCESS_KEY="<access_key>"
mkdir /mnt/root
# store the credentials where mount.cifs can read them
if [ ! -d "/etc/smbcredentials" ]; then
    mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then
    echo "username=$STORAGE_ACCOUNT_NAME" >> "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
    echo "password=$ACCESS_KEY" >> "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
fi
chmod 600 "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
# persist the mount in fstab, then mount the share
echo "//$STORAGE_ACCOUNT_NAME.file.core.windows.net/root /mnt/root cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab
mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/root /mnt/root -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
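As a quick sanity check (hedged; not part of the original steps), confirm the share is mounted and writable before wiring it into your application:
# verify the CIFS mount is present
mount | grep cifs
# write a test file to the share
touch /mnt/root/test-from-container.txt
ls -l /mnt/root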
Similar instructions to mount the share can also be found through the portal: Your-Storage-Account > File shares > Your-share > Connect.

There is currently no emulator for Azure File shares. One workaround is to create a storage account in Azure and use PowerShell to mount the file share as a local drive on your machine.
For more information on this, you can follow Use an Azure file share with Windows
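As a rough illustration of that approach (hedged; the linked doc uses a longer PowerShell script, and the account, share, and key here are placeholders), you can map the share to a drive letter from a Windows command prompt:
net use Z: \\<storage-account>.file.core.windows.net\<share> /user:Azure\<storage-account> <access-key>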

Related

Copy file to Docker volume in Azure context

I run these docker commands locally to copy a file to a volume, and this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and from the container I need to access a volume containing a file that is on my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume pointed to by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying that the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can
use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use
the "default" context to run this command
How to copy a file from my local disk to volume in Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
When you mount an Azure File Share to ACI, you upload the files into the File Share itself, and they then appear inside the container instances that mount it. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
The docker cp command copies files/folders between a container and the local filesystem. But the File Share lives in Azure Storage, not on your local machine, and the container runs in ACI, so neither side of the copy is local.
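A minimal sketch of that upload, assuming the volume created above is backed by a file share named test-volume in mystorageaccount (names taken from the question; authentication flags omitted):
az storage file upload \
    --account-name mystorageaccount \
    --share-name test-volume \
    --source ./somefile.txt \
    --path somefile.txt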

How to mount Azure App Service storage in Jenkins Docker Container?

I'm trying to host Jenkins in a Docker container in the Azure App Service. This means it's 'linux' hosting.
By default the jenkins/jenkins:2.110-alpine Docker image stores its data in the /var/jenkins_home folder in the container. I want this data/config persisted to Azure persistent storage so that it survives container restarts.
I've read documentation and blogs stating that you can have container data persisted if it's stored in the /home folder.
So I've customized the Jenkins Dockerfile to look like this...
FROM jenkins/jenkins:2.110-alpine
USER root
RUN mkdir /home/jenkins
RUN ln -s /var/jenkins_home /home/jenkins
USER jenkins
However, when I deploy to Azure App Service I don't see the files in my /home folder (looking in the Kudu console). The app starts just fine, but I lose all of my data when I restart the container.
What am I missing?
That's expected, because you only persist a symlink (ln -s /var/jenkins_home /home/jenkins) on the Azure host; all the files physically live inside the container.
To fix this, you have to actually change the Jenkins configuration to store all data in the /home/jenkins directory you already create in your Dockerfile above.
A quick search for Jenkins data folder suggests that you set the environment variable JENKINS_HOME to your directory.
In your Dockerfile:
ENV JENKINS_HOME /home/jenkins
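Putting it together, a hedged sketch of the corrected Dockerfile (same base image as above; the mkdir -p and chown are my assumptions so the jenkins user can write to its new home):
FROM jenkins/jenkins:2.110-alpine
USER root
# create the persisted home and hand it to the jenkins user
RUN mkdir -p /home/jenkins && chown jenkins:jenkins /home/jenkins
# point Jenkins at the /home path that App Service persists
ENV JENKINS_HOME /home/jenkins
USER jenkins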

Copy files from Azure Blob Storage into an Azure SUSE Linux VM

I understand that we can mount Azure File shares onto an Ubuntu Linux VM. Is there an alternative for moving data into a SUSE Linux VM instead of an Ubuntu VM?
Moreover, my data lies in Blob containers instead of a file share. If needed, I can move the data from Blob containers into a file share. Any suggestions, please.
You could mount Azure File shares onto a SUSE Linux VM. You could use the following to mount your File share:
sudo zypper install samba*
# create your directory
mkdir -p /home/test
sudo mount -t cifs //shuitestdiag630.file.core.windows.net/shuifile [mount point] \
    -o vers=3.0,username=shuitestdiag630,password=[storage account access key],dir_mode=0777,file_mode=0777
For more information, please refer to the linked article.
Also, you could download your blob data to your VM directly. You can use the Azure CLI to manage your blob data; it runs on Linux and supports downloading blobs. For more information about the Azure CLI, please refer to the linked article.
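For example, a minimal sketch of downloading a single blob with the Azure CLI (the container and blob names are placeholders, not from the original answer):
az storage blob download \
    --account-name shuitestdiag630 \
    --account-key [storage account access key] \
    --container-name [container] \
    --name [blob name] \
    --file /home/test/[blob name]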

How to make an Azure VM & configure containers to use Azure File Storage via docker CLI / quickstart terminal?

I'm using the latest Docker Toolbox and I would like to launch docker containers on Azure that connect to an Azure File Store. What should one run to achieve this from the docker quick start terminal?
The easiest way to do this is to create an Ubuntu VM with Docker preinstalled on Azure:
https://azure.microsoft.com/en-us/blog/introducing-docker-in-microsoft-azure-marketplace/
Then follow the Azure File System Docker Volume Driver install instructions here:
https://github.com/Azure/azurefile-dockervolumedriver/blob/master/contrib/init/systemd/README.md
Once you can successfully create volumes on that VM, you can make them shared volumes or Data Volume Containers to share them between your Docker containers:
https://docs.docker.com/engine/tutorials/dockervolumes/
For more generic instructions, please use rbj325's answer below.
Create docker-machine
First things first, we need an Azure VM to use. We can create it with the docker-machine CLI. This set of instructions creates it with Ubuntu 16.04 LTS to simplify(ish) the installation steps.
docker-machine create --driver azure --azure-subscription-id XXXX \
--azure-location westeurope --azure-resource-group XXX \
--azure-image canonical:UbuntuServer:16.04.0-LTS:latest XXXXXX
This sets up everything we need on Azure.
Install the Azure File Storage Docker plugin
We then need to SSH into the docker-machine to install the plugin:
docker-machine ssh XXXXXX
Once in, the following steps can be taken to install the plugin:
sudo -s
wget -qO /usr/bin/azurefile-dockervolumedriver https://github.com/Azure/azurefile-dockervolumedriver/releases/download/[VERSION]/azurefile-dockervolumedriver
chmod +x /usr/bin/azurefile-dockervolumedriver
wget -qO /etc/systemd/system/azurefile-dockervolumedriver.service https://raw.githubusercontent.com/Azure/azurefile-dockervolumedriver/master/contrib/init/systemd/azurefile-dockervolumedriver.service
cp [myconfigfile] /etc/default/
systemctl daemon-reload
systemctl enable azurefile-dockervolumedriver
systemctl start azurefile-dockervolumedriver
systemctl status azurefile-dockervolumedriver
Note that there are two things required here:
the latest version number for the driver from github
a file containing some azure storage credentials
For my installation process, I made a script I could reuse, and put my config file in a secure store that could be retrieved at install time. Note that it gets driver version 0.2.1.
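For reference, a hedged sketch of what [myconfigfile] (installed as /etc/default/azurefile-dockervolumedriver) contains, based on the driver's README:
# storage account credentials read by the volume driver at startup
AF_ACCOUNT_NAME=<your_storage_account>
AF_ACCOUNT_KEY=<access_key>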
Once this has completed, exit the ssh connection.
Create volumes
You should now be able to create docker volumes
docker volume create --name filestore -d azurefile -o share=filestore
Create docker containers
You can now use this volume with docker containers
docker run -it --name=example -v filestore:/filestore ubuntu /bin/bash

Mount SMB/CIFS share within a Docker container

I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried adding the appropriate entries to the container's /etc/fstab file and mounting them with mount -a, but I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities it actually needs (see the documentation). The SYS_ADMIN capability is the one that grants mount privileges.
Yes.
There is a closed issue, "mount.cifs within a container":
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out, and
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
then works within the container.
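Put together, a hedged end-to-end sketch of that answer (hosts, paths, and credentials are placeholders):
# start a container with only the capabilities mount -t cifs needs
docker run -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH ubuntu /bin/bash
# inside the container: install the CIFS tools and mount the share
apt update && apt install -y cifs-utils
mkdir -p /<localpath>
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>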
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like a two-step process (one to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
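A hedged sketch of that two-step -T approach (server, share, and paths are placeholders): pull the remote directory into a local tar archive, then extract it:
smbclient //server/share -Tc files.tar path/to/dir
tar xf files.tar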
You can use the Netshare Docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them with --privileged.
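A hedged sketch of the plugin's usage based on its README (the daemon must be running on the Docker host first; server, share, and credentials are placeholders):
# on the host, start the plugin daemon in CIFS mode
docker-volume-netshare cifs --username <user> --password <password>
# then mount the share as a volume via the cifs driver
docker run -it --volume-driver=cifs -v <server>/<share>:/mount ubuntu /bin/bash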
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER, and WinShare parts here. This will ask for your sudo password, then for the remote share's password.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you pass the uid and gid options, you will have write access without using sudo all the time.
Now you can run your container using the -v flag to share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.
