docker volume create minty
docker run -v minty:/Minty:rw mango
docker run -v minty:/Minty:rw banana
The mango container then creates several empty folders in /Minty and mounts filesystems on them. Unfortunately, the banana container can see the empty folders, but can't see any of the mounted filesystems.
I presume this has something to do with Docker running each container in its own mount namespace. Does anybody know how to fix this?
I've seen several answers that claim to fix this, by making use of "data containers" and the --volumes-from option. However, it appears that data containers are a deprecated feature now. Regardless, I tried it, and it doesn't seem to make any difference; the second container sees only the empty mount points, not the newly-mounted filesystems.
Even bind-mounting a folder to the host doesn't allow the host to see the newly-mounted filesystems.
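For concreteness, what mango does inside the volume is roughly the following (the device path is just a placeholder):
mkdir /Minty/data
mount /dev/sdb1 /Minty/data   # visible inside mango, but banana sees only the empty folder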
Related
I'm affected by the issue described in moby/moby/27886: loop devices I create inside a Docker container do not appear in the container, but they do appear on the host.
One of the potential workarounds is to mount /dev into the Docker container, so something like:
docker run -v /dev:/dev image:tag
I know that it works, at least on Docker Engine and the Linux kernel that I currently use (20.10.5 and 5.4.0-70-generic respectively), but I'm not sure how portable and safe it is.
In runc's libcontainer/SPEC for filesystems I found some detailed information on /dev mounts for containers, so I'm a bit worried that mounting /dev from the host might cause unwanted side-effects. I even found one problematic use-case that was fixed in 2018...
So, is it safe to mount /dev into a Docker container? Is it guaranteed to work?
I'm not asking whether it works, because it seems it does; I'm interested in whether this is valid usage, natively supported by Docker Engine.
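For what it's worth, I know I could pass individual devices instead, e.g. something like:
docker run --device /dev/loop-control --device /dev/loop0 image:tag
(the device paths are only examples), but since the loop devices are created on the fly I can't list them all up front, which is why bind-mounting the whole of /dev looks attractive.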
I want to mount my USB drive into a running Docker container to manually back up some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyway. Delete and restart your container and use a -v option to mount a volume where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
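A rough sketch of that cycle (the names and paths here are placeholders, not anything from your setup):
# save anything you can't afford to lose first
docker cp mycontainer:/data ./data-backup
# then recreate the container with the mount you actually want
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer -v /mnt/usb-drive:/data myimage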
As David Maze mentioned, it's almost impossible to change the volume location of an existing container by using normal docker commands.
I found an alternative way that works for me. The main idea is to convert the existing container into a new Docker image and initialize a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory on the host and it will also be visible at /mnt/disk inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
Adjust /dev/sda1 to match your device, of course.
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
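To sanity-check that the propagation works, mount the drive on the host as above and then list the target from inside the running container, for example:
docker exec mynextcloudpi ls /mnt/disk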
My current development environment allows for automatic code reload whenever a file changes (e.g., nodemon / webpack). However, I am setting up a Kubernetes (minikube) environment so that I can quickly open 3-4 related services at once.
Everything is working fine, but it is not currently doing the automatic code reload. I tried mounting the volume, but there is some conflict with the way Docker and VirtualBox handle files, such that file changes from the host are not reflected in the Docker container. (That's not the first link I have that appears related to this problem; it's just the first I found while googling it on another day.)
Anyway, long story short, people have trouble getting live reload working in development. I've found the problem littered throughout the interweb with very few solutions. The best solution I've found so far is one where the person used tar from the host to sync folders.
However, I would like a solution from the container. The reason is that I want to run the script from the container so that the developer doesn't have to run some script on his host computer every time he starts developing in a particular repo.
In order to do this, however, I need to run rsync from the container to the host machine. And I'm having a surprising amount of trouble figuring out how to write the syntax for that.
Let's pretend my app exists in my container and host respectively as:
/workspace/app # location in container
/Users/terence/workspace/app # location in host computer
How do I rsync from the container to the host? I've tried using 172.17.0.17 and 127.0.0.1, to no avail. I'm not entirely sure there is even a way to do it.
Examples I tried:
rsync -av 172.17.0.17:Users/terence/workspace/app /workspace/app
rsync -av 127.0.0.1:Users/terence/workspace/app /workspace/app
If you're running the rsync from the host (not inside the container), you could use docker cp instead:
e.g., docker cp containerName:/workspace/app /Users/terence/workspace/app
Could you clarify: are you running the rsync from the host or from inside the container?
If it's from inside the container, it'll depend a lot on the --network the container is attached to (e.g., bridged or host) and also on the mounted volumes (i.e., when you started up the container, did you use the -v flag?).
Update: For rsync to work from within the container you need to expose the host's dir to the container.
As you think of a solution, keep in mind this note from the Docker docs on mounting a host directory as a data volume:
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
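A minimal sketch of that, using the paths from the question (the image name and the /host-app mount point are made up):
# on the host: start the container with the source tree exposed read-only
docker run -it -v /Users/terence/workspace/app:/host-app:ro myimage bash
# inside the container: sync from the mounted host copy into the app directory
rsync -av --delete /host-app/ /workspace/app/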
Running Docker on the Mac with a centos image, I see mounted volumes taking on the ownership of the internal centos user, while on the host filesystem the ownership is mine (mdf:mdf).
Using the same centos image on RHEL 7, I see the volumes mounted, but inside, in centos, the home dir and the files all show my uid (1055).
I can do a recursive chown to, e.g., insideguy:insideguy, and all looks right. But back in the host filesystem, the ownerships have changed to some other user in the registry that has the same uid (1001) as was selected for insideguy when useradd was executed.
Is there some fundamental limitation in docker for Linux that makes this happen?
As another side effect, in our cluster one cannot chown on a mounted filesystem, even with sudo privileges; only on a local filesystem. So the desire to keep the docker home directories in, e.g., ~/dockerhome, fails because docker seems to be trying (and failing) to perform some chowns (not described in the Dockerfile or the start script, so assumed to be part of the --volume treatment). Placed in /var or /opt with appropriate ownerships, all goes well.
Any idea what's different between the two docker hosts?
Specifics: OSX 10.11.6; docker v1.12.1 on mac, v1.12.2 on RHEL 7; centos 7
There is a fundamental limitation to Docker on OS X that makes this happen: Docker only runs on Linux.
Running Docker on other platforms requires first setting up a Linux VM (historically through VirtualBox, although more recently other options are available) and then running Docker inside that VM.
When Docker is running natively on Linux, it shares filesystems directly with the host when you use something like docker run -v /host/path:/container/path. So if inside the container you run chown userA somefile, and user A has userid 1001, and on your host that user id belongs to userB, then of course when you look at the files on the host they will appear to be owned by userB. There's no magic here; this is just how Unix file permissions work. You get the same behavior if, say, you were to move a disk or NFS filesystem from one host to another that had conflicting entries in their local /etc/passwd files.
Most Docker containers run as root (or at least, not as your local user). This means that any files created by a process in Docker will typically not be owned by you, which can of course cause problems if you are trying to access a filesystem that does not permit this sort of access. Your choices when using Docker are pretty much the same choices you have when not using Docker: either ensure that you are running containers as your own user id -- which may not be possible, since many images are built assuming they will be running as root -- or arrange to store files somewhere else.
This is one of the reasons why many people discourage the use of host volume mounts, because it can lead to this sort of confusion (and also because when interacting with a remote Docker API, the remote Docker daemon doesn't have any access to your local host filesystem).
With Docker for Mac, there is some magic file sharing that goes on to expose your local filesystem to the Linux VM (for example, with VirtualBox, Docker may use the shared folders feature). This translation layer is probably the cause of the behavior you've noted on OS X with respect to file ownership.
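Coming back to the "run as your own user id" option: if the image tolerates it, that looks something like this (the image name is a placeholder):
docker run -u $(id -u):$(id -g) -v /host/path:/container/path myimage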
I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will be run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried to add the appropriate entries to the /etc/fstab file in the container and mount them with mount -a. I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
Yes.
There is a closed issue, mount.cifs within a container:
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out, and
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
then works within the container.
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option which can create or extract .tar archives, however this looks like it would be a two step process (one to create the .tar and then another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
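If smbclient accepts - as the tar file name (I haven't verified this, so treat it as a sketch), you might be able to pipe the tar mode straight into tar and do it in one step:
smbclient //server/share -Tc - /path/to/dir | tar -xf -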
You can use the Netshare Docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them as --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER and WinShare here. This will ask for your sudo password, then it will ask for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition to that, since you use the uid and gid options, you will have write access without using sudo all the time.
Now you can run your container by using -v tag and share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the Windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the Windows share.