I understand that we can mount Azure File shares onto an Ubuntu Linux VM. Is there any alternative for moving data into a SUSE Linux VM instead of an Ubuntu VM?
Moreover, my data lies in Blob containers rather than a file share. If needed, I can move the data from the Blob containers into a file share. Any suggestions, please.
You can mount Azure File shares onto a SUSE Linux VM. You can use the following to mount your file share.
sudo zypper install samba*
# create your directory (this is the mount point used below)
mkdir -p /home/test
sudo mount -t cifs //shuitestdiag630.file.core.windows.net/shuifile [mount point] -o vers=3.0,username=shuitestdiag630,password=[storage account access key],dir_mode=0777,file_mode=0777
For more information, please refer to the article.
Also, you could download your blob data to your VM. You can use the Azure CLI to manage your blob data; it runs on Linux and can download blobs. For more information about the Azure CLI, please refer to the article.
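For example, a minimal sketch of downloading a single blob with the Azure CLI (the account, container, and blob names here are placeholders, not from your setup):

# download one blob to the current directory
az storage blob download \
  --account-name mystorageaccount \
  --account-key "<storage-account-key>" \
  --container-name mycontainer \
  --name myblob.dat \
  --file ./myblob.dat

az storage blob download-batch can likewise pull down an entire container in one call.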
I want to integrate and test Azure File Storage from my web application, and before deploying it to the environment (where an Azure shared folder is provisioned), I would like to test it on my local machine.
I have a Docker container running Azurite locally, and I am able to emulate an Azure Blob Storage container on my local machine, connect to it, and test.
I just want to be able to do the same for Azure File Storage. I don't see support for it in Azurite or the deprecated Azure Storage Emulator. As per the 5th point of the official Microsoft docs (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-emulator#differences-between-the-storage-emulator-and-azure-storage): "The File service and SMB protocol service endpoints aren't currently supported in the Storage Emulator."
Is there a way to emulate File Storage on Azurite? Or any other supporting application, docker image, etc.?
One option is to mount the file share directly to a running docker container as a CIFS share. I tested this on the latest Ubuntu docker image pulled from docker hub. To authenticate to the file share, you'll need the storage account's name and access key which can be found in the portal view for your particular storage account.
Pull the latest image and run it with the --privileged flag to avoid "mount: <mount-path>: permission denied" errors:
docker pull ubuntu:latest
docker run -it --entrypoint /bin/sh --privileged ubuntu:latest
Install the cifs-utils package in case it's missing:
apt update
apt install cifs-utils -y
In my example, the file share is named root, so I mount it at /mnt/root in the container.
STORAGE_ACCOUNT_NAME="<your_storage_account>"
ACCESS_KEY="<access_key>"
mkdir /mnt/root
if [ ! -d "/etc/smbcredentials" ]; then
mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then
bash -c 'echo "username='$STORAGE_ACCOUNT_NAME'" >> /etc/smbcredentials/'$STORAGE_ACCOUNT_NAME'.cred'
bash -c 'echo "password='$ACCESS_KEY'" >> /etc/smbcredentials/'$STORAGE_ACCOUNT_NAME'.cred'
fi
chmod 600 /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred
bash -c 'echo "//'$STORAGE_ACCOUNT_NAME'.file.core.windows.net/root /mnt/root cifs nofail,vers=3.0,credentials=/etc/smbcredentials/'$STORAGE_ACCOUNT_NAME'.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/root /mnt/root -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
Similar instructions to mount the share can also be found through the portal: Your-Storage-Account > File shares > Your-share > Connect.
There is currently no emulator for File shares, but one workaround is to create a storage account in Azure and use the file share as a local drive on your machine (for example via PowerShell).
For more information on this, you can follow Use an Azure file share with Windows.
I run these docker commands locally to copy a file to a volume, and this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and I need the container to access a volume containing a file from my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume pointed by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can
use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use
the "default" context to run this command
How to copy a file from my local disk to volume in Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
As far as I know, when you mount an Azure File Share to ACI, you should upload the files into the File Share, and they will then exist in the container instance where the share is mounted. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
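For example, a minimal sketch with az storage file upload (the account name is a placeholder, and this assumes the volume maps to a file share with the same name, which is how the ACI integration creates it):

# upload a local file to the root of the share backing the volume
az storage file upload \
  --account-name mystorageaccount \
  --account-key "<storage-account-key>" \
  --share-name test-volume \
  --source ./somefile.txt

Once the file is in the share, it appears at the mount path (/target in your example) inside the running container.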
The docker cp command copies files/folders between a container and the local filesystem. But the File Share lives in Azure Storage, not on your local machine, and the container runs in ACI, so neither end is local.
For development: I have a node API deployed in a docker container. That docker container runs in a Linux virtual machine.
For deployment: I push the docker image to Azure (ACR) and then our admin creates the container (ACI).
I need to copy a data file named "config.json" in a shared volume "data_storage" in Azure.
I don't understand how to write the command in the dockerfile that will copy the JSON file in Azure, because building the dockerfile builds the image, while that folder is only mapped when creating the container with "docker run -v", not at the image-build stage.
Any help please?
As far as I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and execute that script as the command when you run the image, so that you do not need the JSON file during image creation. When you run the image with the volume mounted, the JSON file already exists in the container and all the actions, including the copy, can go ahead.
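For example, a minimal sketch of such a startup script, assuming the share is mounted at /data_storage and the app lives in /app (both paths, and the node entry point, are assumptions):

#!/bin/sh
# start.sh - runs at container start, after the volume has been mounted
cp /data_storage/config.json /app/config.json  # copy the config from the mounted share
exec node /app/server.js                       # then start the app

In the dockerfile you would then use CMD ["/bin/sh", "/app/start.sh"] instead of copying config.json at build time.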
For ACI, you can store the JSON file in an Azure File Share and mount it into the container instance, following the steps in Mount an Azure file share in Azure Container Instances.
To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
This assumes the data_storage you are referring to is an Azure File share.
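For reference, a minimal sketch of the az container create flags involved, from the linked doc (the resource names, image, and share name are placeholders your admin would replace):

az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image myregistry.azurecr.io/azureimage \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "<storage-account-key>" \
  --azure-file-volume-share-name data-storage \
  --azure-file-volume-mount-path /target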
As per the title, I have created an Azure App Service running a Tomcat image (docker container).
When I set up the path map to a File Share, the container (or Tomcat) keeps complaining that the folder I mounted into the container is not writable.
I read on Azure's website that mounted File Shares are read/write => https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage
So I'm confused as to why it's still not working. Any help would be really appreciated.
Not sure how you mounted the storage in your dockerfile (drop your snippet if possible), but I simply made the WORKDIR match the mount path. My dockerfile was for a node app, but that shouldn't make a difference, and I realized that I didn't need the VOLUME keyword (it's commented out in the sketch below).
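A minimal sketch of the kind of dockerfile described, where the base image, paths, and entry point are all assumptions; the key point is that WORKDIR matches the mount path configured in the App Service path mapping:

FROM node:18-alpine
# WORKDIR matches the mount path configured in the App Service path mapping
WORKDIR /home/site/wwwroot
COPY . .
# VOLUME /home/site/wwwroot
# (the VOLUME keyword above turned out not to be needed, so it stays commented out)
CMD ["node", "server.js"]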
I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will be run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried to add the appropriate entries to the /etc/fstab file in the container & mounting them with mount -a. I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
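For example, a sketch that drops all capabilities and adds back only what a CIFS mount typically needs (mount.cifs generally requires DAC_READ_SEARCH in addition to SYS_ADMIN):

docker run -it --cap-drop ALL --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH ubuntu /bin/bash

Note that dropping ALL may break other things the container does, so add back whatever else your workload requires.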
Yes.
There is a closed issue, mount.cifs within a container (https://github.com/docker/docker/issues/22197), according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out, and
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
within the container then works.
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
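Downloading works the same way with get instead of put; a sketch with placeholder names:

smbclient //server/share -c 'cd /path/to/file; get myfile'

Authentication can be supplied with -U user%password or with a credentials file via -A if the server does not allow anonymous access.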
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like a two-step process (one to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
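One untested possibility: smbclient writes the archive to standard output when the tar filename is -, so you may be able to pipe it straight into tar (the paths here are placeholders):

smbclient //server/share -Tc - /path/to/files | tar xvf -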
You can use the Netshare docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them as --privileged.
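With the plugin daemon running, the usage pattern from its documentation looks roughly like this (host and share names are placeholders, and this sketch assumes the cifs volume driver is registered):

docker run -it --volume-driver=cifs -v server/share:/mount ubuntu /bin/bash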
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER, and WinShare here. This will ask for your sudo password, then for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you used the uid and gid options, you will have write access without needing sudo all the time.
Now you can run your container using the -v flag to share the volume between the server and the container.
Let's say you run it like the following:
docker run -d --name mycontainer -v ~/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.