Mounting individual files from an Azure file share into a container

I'm currently attempting to use Azure's docker compose integration to deploy a visualization tool. The default compose file can be found here. Below is a snippet of the file:
services:
  # other services omitted
  cbioportal-database:
    restart: unless-stopped
    image: mysql:5.7
    container_name: cbioportal-database-container
    environment:
      MYSQL_DATABASE: cbioportal
      MYSQL_USER: cbio_user
      MYSQL_PASSWORD: somepassword
      MYSQL_ROOT_PASSWORD: somepassword
    volumes:
      - ./data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - ./data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro
      - cbioportal_mysql_data:/var/lib/mysql
One of the services it uses is based on the official mysql image, which allows the developer to put SQL files into the /docker-entrypoint-initdb.d folder to be executed the first time the container starts. In this case, these are SQL scripts used to create a database schema and seed it with some default data.
Below is a snippet I took from Docker's official documentation in my attempt to mount these files from a file share into the cbioportal-database container:
services:
  # other services omitted
  cbioportal-database:
    volumes:
      - cbioportal_data/mysql:/var/lib/mysql
      - cbioportal_data/data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - cbioportal_data/data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro

volumes:
  cbioportal_data:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-file-share
      storage_account_name: cbioportalstorage
Obviously, that doesn't work. Is there a way to mount specific files from Azure file share into the cbioportal-database container so it can create the database, its schema, and seed it?

I tried to reproduce this, but I was unable to mount a single file or folder from an Azure File Share into an Azure Container Instance. The file share is treated as a whole once you mount it as a volume to the container or the host.
One more thing to notice: whatever files you update in the file share are updated in the container as well after you mount the file share as a volume.
This may be the reason we cannot mount a specific file or folder as a volume in a container.
There are some limitations to this as well:
• You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
• You can only mount the whole share and not the subfolders within it.
• Azure file share volume mount requires the Linux container run as root.
• Azure File share volume mounts are limited to CIFS support.
• Share cannot be mounted as read-only.
• You can mount multiple volumes, but not with the Azure CLI; you would have to use ARM templates instead.
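Given the whole-share limitation, a common workaround for the original question is to dedicate a share to the init scripts and mount it in its entirety at /docker-entrypoint-initdb.d. Below is only a sketch under the assumption that cgds.sql and seed.sql.gz have been uploaded to the root of a (hypothetical) cbioportal-init-share beforehand:

```yaml
services:
  cbioportal-database:
    image: mysql:5.7
    volumes:
      # Whole share mounted at the init folder; upload cgds.sql and
      # seed.sql.gz to the root of that share beforehand.
      - cbioportal_init:/docker-entrypoint-initdb.d
      - cbioportal_mysql:/var/lib/mysql

volumes:
  cbioportal_init:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-init-share    # hypothetical share name
      storage_account_name: cbioportalstorage
  cbioportal_mysql:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-mysql-share   # hypothetical share name
      storage_account_name: cbioportalstorage
```

Note that the init share cannot be mounted :ro (read-only shares are not supported, per the limitations above), and that keeping /var/lib/mysql on a CIFS-backed share can cause locking and performance problems for MySQL, so the data volume may be better handled differently.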
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
https://www.c-sharpcorner.com/article/mounting-azure-file-share-as-volumes-in-azure-containers-step-by-step-demo/


Access CVAT connected file share from NFS mounted storage

I have mounted an Azure Storage container to an Ubuntu 18.04 VM following the official documentation.
Then I updated the docker compose file (docker-compose.override.yml), following the CVAT (Computer Vision Annotation Tool) official documentation for mounting shared storage to the CVAT docker and the docker-compose documentation, as follows:
version: '3.3'
services:
  cvat:
    environment:
      CVAT_SHARE_URL: 'Mounted from /mnt/share host directory'
    volumes:
      - cvat_share:/home/django/share:ro

volumes:
  cvat_share:
    driver_opts:
      type: "nfs"
      device: ":/mnt/share"
      o: "addr=10.40.0.199,nolock,soft,rw"
Then I installed CVAT following the installation guide. But when I try to run the CVAT docker using the command docker-compose up -d, I get the following error:
ERROR: for cvat Cannot create container for service cvat: failed to mount local volume: mount :/mnt/share:/opt/docker/volumes/cvat_cvat_share/_data, data: addr=10.40.0.199,nolock,soft: operation not supported
ERROR: Encountered errors while bringing up the project.
I tried different changes in the config file, but no luck. The CVAT documentation says you can mount the cloud storage as FUSE and use it later as a share. But does it only support the FUSE protocol? How can I use cloud storage mounted using the NFS protocol in the CVAT tool?
I didn't try with NFS, but I faced a similar issue with GCSFuse to mount Google Cloud Storage.
What worked for me was to mount through fstab, let's say at /mnt/cvat for the sake of the example, and then edit the docker-compose.override.yml file as follows:
version: '3.3'
services:
  cvat:
    environment:
      CVAT_SHARE_URL: 'Mounted from /mnt/share host directory'
    volumes:
      - cvat_share_bbal:/home/django/share/cvat:ro

volumes:
  cvat_share_bbal:
    driver_opts:
      type: none
      device: /mnt/cvat
      o: bind
When I had the device mounted at /home/django/share, Django was giving me errors.
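For reference, the fstab entry mentioned above could look something like this for GCSFuse (a sketch; the bucket name, mount point, and uid/gid are illustrative):

```
my-bucket /mnt/cvat gcsfuse rw,allow_other,uid=1000,gid=1000 0 0
```

After a reboot (or mount -a), /mnt/cvat can then be used as the bind-mount device in the compose file.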

Host path not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section

My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello world print in it. (Do I also need a Dockerfile here?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What does "docker compose up" have to do with Azure? I don't want to use Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of Azure with az logout but still got this strange error on my MacBook.
I've encountered the same issue, but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point; I can't speak to that logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience, (regardless if you want to use file share or not) you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this:
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80

volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you wanted to use in the file share that is driving the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure but if you do end up using volumes with Azure this is the way to go.
Let me know if this helps!
I was testing deploying docker compose on Azure and faced the same problem as yours.
Then I tried to run docker images, and that gave me the clue.
It said: "image command not available in current context, try to use default"
So I found the command docker context use default, and it worked!
So Azure somehow changed the docker context, and you need to change it back.
https://docs.docker.com/engine/context/working-with-contexts/

Mounting from persistent storage into azure multi container app

I'm trying to build a multi-container app with Azure. I'm struggling with accessing persistent storage. In my docker-compose file I want to add the config file for my rabbitmq container. I've mounted a file share to the directory /home/fileshare, which contains the def.json. In the cloud it doesn't seem to create the volume, as on startup rabbitmq can't find the file. If I do this locally and just save the file somewhere, it works.
Docker Compose File:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      - /home/fileshare/def.json:/opt/rabbitmq-conf/def.json
    expose:
      - 5672
      - 15672
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"
    networks:
      - cloudnet

networks:
  cloudnet:
    driver: bridge
You need to use the WEBAPP_STORAGE_HOME env variable that is mapped to persistent storage at /home.
${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
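Applied to the compose file in the question, that would look something like this (a sketch; WEBSITES_ENABLE_APP_SERVICE_STORAGE also has to be set to true in the app settings for /home to be backed by persistent storage):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      # ${WEBAPP_STORAGE_HOME} resolves to the persistent /home storage
      - ${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
```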
From my understanding, you want to mount the Azure File Share to the rabbitmq container and upload the def.json file to the file share so that you can access the def.json file inside the container.
Follow the steps here to mount the file share to the container. Note that it only supports mounting the file share as a whole, not a file directly.
The solution to this problem seems to be to use ftp to access the webapp and save the definitions file. Docker-Compose is in preview mode (since 2018), and a lot of the options are actually not supported. I tried mounting the storage to a single container app and used ssh to connect to it, and the file is exactly where one would expect it to be. With a multi-container app this doesn't work.
I feel like the docker-compose feature is not yet fully supported

terraform - mounting a directory in yaml

I am managing instances on Google Cloud Platform and deploying a docker image into GCP using a Terraform script. The problem that I have now with the Terraform script is mounting a host directory into a docker container when the docker image is started.
If I run a docker command manually, I can do something like this:
docker run -v <host_dir>:<container_local_path> -it <image_id>
But I need to configure the mount directory in the Terraform YAML. This is my Terraform YAML file:
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
I have a directory (/www/project/data) in the host machine. This directory needs to be mounted into the docker container.
Does anybody know how to mount this directory into this yaml file?
Or any workaround is appreciated.
Thanks.
I found an answer. Please make sure the 'dataDir' name matches between 'volumeMounts' and 'volumes'.
volumeMounts:
  - name: 'dataDir'
    mountPath: '/data/'
volumes:
  - name: 'dataDir'
    hostPath:
      path: '/tmp/logs'
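Putting the two fragments together with the spec from the question, the full container declaration might look like this (a sketch; the host path is the /www/project/data directory from the question):

```yaml
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
      # volumeMounts goes inside the container entry...
      volumeMounts:
        - name: 'dataDir'
          mountPath: '/data/'
  # ...while volumes sits at the spec level, with matching names.
  volumes:
    - name: 'dataDir'
      hostPath:
        path: '/www/project/data'
```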
I am assuming that you are loading Docker images into a container-based Compute Engine instance. My recommendation is to determine your recipe for creating your GCE image and mounting your disk manually using the GCP console. The following will give guidance on that task.
https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers#mounting_a_host_directory_as_a_data_volume
Once you are able to achieve your desired GCP environment by hand, there appears to be a recipe for translating this into a Terraform script as documented here:
https://github.com/terraform-providers/terraform-provider-google/issues/1022
The high-level recipe is the recognition that the configuration of docker commands and specifications is found in the Metadata of the Compute Engine configuration. We can find the desired metadata by running the command manually and looking at the REST request that would achieve it. Once we know the metadata, we can transcribe it into the equivalent settings in the Terraform script to be added by Terraform.

Inline nginx conf in composer.yml file: possible?

I want to set up a reverse proxy with authentication for my webapp; here's my compose file:
version: '3'
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    container_name: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 80:80
  myapp:
    image: registry.azurecr.io/myapp:latest
    container_name: myapp
    expose:
      - 8080
As you can see, I rely on two files to edit the nginx configuration. The problem is that I want to deploy this application to Azure's App Service, but Azure does not allow specifying external configuration files (as far as I know).
So, is there a way to specify a couple of username/password pairs in this same compose file?
For your requirement, you can use persistent storage for the web app or use path mapping, then put the file there. If you just want to put the file inside the compose file itself, that does not seem to be possible.
For persistent storage, mount the persistent volume to the path inside the container in the compose file, then enable persistent storage by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings. For more details, see Use persistent storage in Docker Compose.
For path mapping, you need to create a storage account and use blob or file share. See the containerized app document. Set the volume like this:
wordpress:
  image: wordpress:latest
  volumes:
    - <custom-id>:<path_in_container>
For more details about steps, see Serve content from Azure Storage in App Service on Linux.
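For the persistent storage option described above, applied to the nginx service from the question, the compose file might look like this (a sketch; it assumes WEBSITES_ENABLE_APP_SERVICE_STORAGE=true in the app settings and that the nginx/ folder has been uploaded to the persistent /home storage, e.g. via FTP, beforehand):

```yaml
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    volumes:
      # ${WEBAPP_STORAGE_HOME} maps to the persistent /home storage
      - ${WEBAPP_STORAGE_HOME}/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ${WEBAPP_STORAGE_HOME}/nginx/.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 80:80
```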
