I have mounted an Azure Storage container to an Ubuntu 18.04 VM following official documentation.
Then I updated the Docker Compose override file (docker-compose.override.yml), following the CVAT (Computer Vision Annotation Tool) documentation on mounting shared storage into the CVAT containers and the docker-compose documentation, as follows:
version: '3.3'
services:
  cvat:
    environment:
      CVAT_SHARE_URL: 'Mounted from /mnt/share host directory'
    volumes:
      - cvat_share:/home/django/share:ro

volumes:
  cvat_share:
    driver_opts:
      type: "nfs"
      device: ":/mnt/share"
      o: "addr=10.40.0.199,nolock,soft,rw"
Then I installed CVAT following the installation guide. But when I try to start CVAT with the command docker-compose up -d, I get the following error:
ERROR: for cvat Cannot create container for service cvat: failed to mount local volume: mount :/mnt/share:/opt/docker/volumes/cvat_cvat_share/_data, data: addr=10.40.0.199,nolock,soft: operation not supported
ERROR: Encountered errors while bringing up the project.
I tried different changes in the config file, but no luck. The CVAT documentation says you can mount cloud storage as FUSE and use it later as a share, but does it only support the FUSE approach? How can I use cloud storage mounted over NFS with CVAT?
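For reference, the named volume above is roughly equivalent to creating it by hand with the local driver, so the same options can be tested outside Compose to confirm whether the NFS options themselves are being rejected. This is only a sketch reusing the address and path from the compose file; note that for the local driver with type nfs, device is normally the export path on the NFS server, not a directory already mounted on the host:
    # recreate the volume manually with the same NFS options
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=10.40.0.199,nolock,soft,rw \
      --opt device=:/mnt/share \
      cvat_share_test
    # the mount only happens when a container first uses the volume
    docker run --rm -v cvat_share_test:/test alpine ls /test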
I didn't try with NFS, but I faced a similar issue with gcsfuse when mounting Google Cloud Storage.
What worked for me was to mount the storage on the host through fstab, say at /mnt/cvat for the sake of the example, and then edit the docker-compose.override.yml file as follows:
version: '3.3'
services:
  cvat:
    environment:
      CVAT_SHARE_URL: 'Mounted from /mnt/share host directory'
    volumes:
      - cvat_share_bbal:/home/django/share/cvat:ro

volumes:
  cvat_share_bbal:
    driver_opts:
      type: none
      device: /mnt/cvat
      o: bind
When I had the device mounted at /home/django/share, Django was giving me errors.
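For completeness, a rough sketch of what the host-side fstab mount could look like; the server address, export path and options below are examples rather than values from a working setup:
    # /etc/fstab on the host
    # NFS case: mount the remote export at /mnt/cvat, then bind it via the volume above
    10.40.0.199:/export/share   /mnt/cvat   nfs      defaults,soft,nolock   0   0
    # gcsfuse case (requires mount.gcsfuse from the gcsfuse package)
    # my-bucket                 /mnt/cvat   gcsfuse  rw,allow_other,uid=1000,gid=1000   0   0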
I'm currently attempting to use Azure's docker compose integration to deploy a visualization tool. The default compose file can be found here. Below is a snippet of the file:
services:
  # other services omitted
  cbioportal-database:
    restart: unless-stopped
    image: mysql:5.7
    container_name: cbioportal-database-container
    environment:
      MYSQL_DATABASE: cbioportal
      MYSQL_USER: cbio_user
      MYSQL_PASSWORD: somepassword
      MYSQL_ROOT_PASSWORD: somepassword
    volumes:
      - ./data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - ./data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro
      - cbioportal_mysql_data:/var/lib/mysql
One of the services it uses is based on the official mysql image, which lets you place SQL files in the /docker-entrypoint-initdb.d folder to be executed the first time the container starts. In this case, these are SQL scripts that create a database schema and seed it with some default data.
Below is a snippet I took from Docker's official documentation in my attempt to mount these files from a file share into the cbioportal-database container:
services:
  # other services omitted
  cbioportal-database:
    volumes:
      - cbioportal_data/mysql:/var/lib/mysql
      - cbioportal_data/data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - cbioportal_data/data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro

volumes:
  cbioportal_data:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-file-share
      storage_account_name: cbioportalstorage
Obviously, that doesn't work. Is there a way to mount specific files from Azure file share into the cbioportal-database container so it can create the database, its schema, and seed it?
I tried to reproduce this, but I was unable to mount a single file or folder from an Azure File Share into an Azure Container Instance. The file share is treated as a whole once you mount it as a volume to the container or host.
One more thing to note: whatever files you update in the file share are updated in the container as well after you mount the share as a volume.
This may be the reason we cannot mount a specific file or folder as a volume in the container.
There are some limitations to this as well:
• You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
• You can only mount the whole share and not the subfolders within it.
• Azure file share volume mounts require the Linux container to run as root.
• Azure file share volume mounts are limited to CIFS support.
• The share cannot be mounted as read-only.
• You can mount multiple volumes, but not with the Azure CLI; you would have to use ARM templates instead.
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
https://www.c-sharpcorner.com/article/mounting-azure-file-share-as-volumes-in-azure-containers-step-by-step-demo/
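Given the whole-share limitation above, one workaround sketch is to dedicate a separate file share to the init scripts and mount it at /docker-entrypoint-initdb.d instead of trying to target individual files. The share names below are placeholders, and there is no :ro flag because of the read-only limitation listed above:
    services:
      cbioportal-database:
        volumes:
          # whole share mounted at the init directory; put cgds.sql and
          # seed.sql.gz at the root of that share beforehand
          - cbioportal_init:/docker-entrypoint-initdb.d
          - cbioportal_mysql_data:/var/lib/mysql

    volumes:
      cbioportal_init:
        driver: azure_file
        driver_opts:
          share_name: cbioportal-init-share        # placeholder
          storage_account_name: cbioportalstorage
      cbioportal_mysql_data:
        driver: azure_file
        driver_opts:
          share_name: cbioportal-data-share        # placeholder
          storage_account_name: cbioportalstorage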
My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello-world print in it. (Do I need to have a Dockerfile here also?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What has "docker compose" up to do with Azure? - I don't want to use Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of azure with: $az logout but got still this strange error on my macbook.
I've encountered the same issue as you, but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point; I can't speak to the logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience (regardless of whether you want to use a file share or not), you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this:
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80

volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you want to use in the file share that backs the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure, but if you do end up using volumes with Azure, this is the way to go.
Let me know if this helps!
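For reference, a rough outline of the Azure-side setup this kind of compose file assumes; the resource and context names below are placeholders:
    # create the file share that backs the named volume
    az storage share create --name myshare --account-name mystorageaccount
    # create and switch to an ACI docker context, then deploy
    docker login azure
    docker context create aci myacicontext
    docker context use myacicontext
    docker compose up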
I was testing a Docker Compose deployment on Azure and faced the same problem as you.
Then I tried to run docker images, and that gave me the clue.
It said: image command not available in current context, try to use default.
So I found the command "docker context use default"
and it worked!
So Azure somehow changed the Docker context, and you need to change it back:
https://docs.docker.com/engine/context/working-with-contexts/
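In short, you can inspect and switch contexts like this:
    docker context ls          # shows the available contexts and which one is active
    docker context use default # switch back to the local Docker engine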
I'm trying to build a multi-container app with Azure. I'm struggling with accessing persistent storage. In my docker-compose file I want to add the config file for my RabbitMQ container. I've mounted a file share to the directory /home/fileshare, which contains the def.json. In the cloud it doesn't seem to create the volume, as on startup RabbitMQ can't find the file. If I do this locally and just save the file somewhere, it works.
Docker Compose File:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      - /home/fileshare/def.json:/opt/rabbitmq-conf/def.json
    expose:
      - 5672
      - 15672
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"
    networks:
      - cloudnet

networks:
  cloudnet:
    driver: bridge
You need to use the WEBAPP_STORAGE_HOME environment variable, which is mapped to persistent storage at /home:
${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
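In the compose file from the question, that would look roughly like this, assuming the file share is exposed as a directory named fileshare under the persistent /home storage (and, depending on your App Service configuration, persistent storage may also need to be enabled via the WEBSITES_ENABLE_APP_SERVICE_STORAGE app setting):
    services:
      rabbitmq:
        volumes:
          # ${WEBAPP_STORAGE_HOME} resolves to the app's persistent /home storage
          - ${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json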
As I understand it, you want to mount the Azure File Share to the rabbitmq container and upload the def.json file to the share so that you can access it inside the container.
Follow the steps here to mount the file share to the container. Note that you can only mount the file share itself, not a single file directly.
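If this is an App Service multi-container app, the file share itself is usually attached as a storage path mapping rather than through the compose volumes section; a sketch with placeholder names:
    az webapp config storage-account add \
      --resource-group <resource-group> \
      --name <webapp-name> \
      --custom-id rabbitmq-conf \
      --storage-type AzureFiles \
      --account-name <storage-account> \
      --share-name <file-share> \
      --access-key <storage-key> \
      --mount-path /opt/rabbitmq-conf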
The solution to this problem seems to be to use FTP to access the web app and save the definitions file. Docker Compose support is in preview (since 2018), and a lot of the options are actually not supported. I tried mounting the storage to a single-container app and used SSH to connect to it, and the file was exactly where one would expect it to be. With a multi-container app this doesn't work.
I feel like the docker-compose feature is not yet fully supported.
Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting Bitnami Consul's data directory to a local volume mount, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the README file. They speak of having the proper permissions, but I do not know how to get that to work using Docker Desktop.
Side note
If I do not mount /bitnami but /bitnami/consul, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to deploy the consul container as root by adding the user: root directive:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
user: root
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c590d7df611 bitnami/consul:1 "/opt/bitnami/script…" 4 seconds ago Up 3 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But adding this line the container is executed as root:
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac206b56f57b bitnami/consul:1 "/opt/bitnami/script…" 5 seconds ago Up 4 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root there shouldn't be any issue with the permissions in the host volume.
The Consul container is a non-root container; in such cases, the non-root user needs to be able to write to the volume.
When using host directories as a volume, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case write permission for others. You can modify the permissions by running sudo chmod o+w ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory).
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically), you should give it the proper permissions with chmod.
I am not familiar with Docker Desktop or Windows environments, but you should be able to do the equivalent actions using a CLI.
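On Linux or macOS, the equivalent preparation would be something along these lines (the path is an example, not the exact Windows path from the question):
    # create the host directory that backs the volume and make it writable for
    # others, so the non-root container user (1001) can write to it
    mkdir -p ~/DockerVolumes/consul
    sudo chmod o+w ~/DockerVolumes/consul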
I am managing instances on Google Cloud Platform and deploying a Docker image to GCP using a Terraform script. The problem I have now with the Terraform script is mounting a host directory into a Docker container when the Docker image is started.
If I can run a Docker command manually, I can do something like this:
docker run -v <host_dir>:<container_local_path> -it <image_id>
But I need to configure the mount directory in the Terraform YAML. This is my Terraform YAML file:
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
I have a directory (/www/project/data) in the host machine. This directory needs to be mounted into the docker container.
Does anybody know how to mount this directory into this yaml file?
Or any workaround is appreciated.
Thanks.
I found an answer. Please make sure the 'dataDir' name matches between 'volumeMounts' and 'volumes'.
volumeMounts:
  - name: 'dataDir'
    mountPath: '/data/'
volumes:
  - name: 'dataDir'
    hostPath:
      path: '/tmp/logs'
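Putting this together with the spec from the question, the full container spec would look roughly like the sketch below. Using the question's /www/project/data as the host path is my assumption; substitute whatever directory you actually need:
    spec:
      containers:
        - name: MyDocker
          image: "docker_image_name"
          ports:
            - containerPort: 80
              hostPort: 80
          # mount the host directory into the container at /data/
          volumeMounts:
            - name: 'dataDir'
              mountPath: '/data/'
      volumes:
        - name: 'dataDir'
          hostPath:
            path: '/www/project/data'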
I am assuming that you are loading Docker images onto a container-based Compute Engine instance. My recommendation is to work out your recipe for creating your GCE image and mounting your disk manually using the GCP console. The following gives guidance on that task:
https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers#mounting_a_host_directory_as_a_data_volume
Once you are able to achieve your desired GCP environment by hand, there appears to be a recipe for translating this into a Terraform script, as documented here:
https://github.com/terraform-providers/terraform-provider-google/issues/1022
The high-level recipe is the recognition that the configuration of Docker commands and the container specification is found in the metadata of the Compute Engine configuration. We can find the desired metadata by running the command manually and looking at the REST request that would achieve it. Once we know the metadata, we can transcribe it into the equivalent settings in the Terraform script.
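As a concrete starting point for that manual run, gcloud can mount a host directory when creating a container VM; the instance name below is a placeholder and the image/host path are taken from the question:
    gcloud compute instances create-with-container my-instance \
      --container-image=docker_image_name \
      --container-mount-host-path=mount-path=/data,host-path=/www/project/data,mode=rw
Inspecting the resulting instance metadata (the gce-container-declaration key) then shows the spec to reproduce in Terraform.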