I have a privileged docker container that mounts a custom file system with FUSE.
This is achieved by bind mounting /dev and /sys from the host into the container and running some custom software that accesses a block device (e.g. /dev/sdX) inside the container to mount a custom FS on some mount point, let's say /mnt/some_mountpoint_inside_the_container (everything still happens inside the container).
Now, I would like to access this mount point, which is mounted inside the docker container, from the host, but to no avail. So far, I have tried:
In my docker-compose.yaml, I defined a (binded) volume from host to container, e.g.:
...
volumes:
- /mnt/mountpoint_at_host:/mnt/some_mountpoint_inside_the_container
...
Then, I FUSE mounted the custom FS inside the container on /mnt/some_mountpoint_inside_the_container. It seems that even though I have added files to /mnt/mountpoint_at_host on my host, the changes are not reflected within the container (i.e. ls -al /mnt/some_mountpoint_inside_the_container inside the container returns nothing). Only AFTER I have unmounted /mnt/some_mountpoint_inside_the_container within the container can the files created on the host be found in the container.
I have also tried to bind mount a parent folder:
...
volumes:
- /mnt/mountpoint_at_host:/mnt/parent_folder
...
Then I created a folder on my host: mkdir -p /mnt/mountpoint_at_host/the_real_mntpt.
I then again FUSE mounted the custom FS in the docker container on:
/mnt/parent_folder/the_real_mntpt.
But still, changes on the host are not reflected on the container side, or on the underlying block device.
Is there any way I can access the mount point mounted within the container from the host? I have thought of approaches like running an NFS service within the container after FUSE mounting the FS and then exposing the NFS port to the host, but that seems a bit inefficient.
EDIT: I am using Ubuntu with docker.io and docker-compose installed from apt. The container itself is CentOS 8.
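For reference, the bind propagation mode can be set explicitly in compose's long volume syntax; with shared propagation, mounts created inside the container under that path propagate back to the host. A sketch using the paths above (the service name is a placeholder, and the host path must itself sit on a mount with shared propagation, which is the default on most systemd-based distros):

```yaml
services:
  fuse_mounter:            # hypothetical service name
    privileged: true       # needed for FUSE/block-device access
    volumes:
      - type: bind
        source: /mnt/mountpoint_at_host
        target: /mnt/some_mountpoint_inside_the_container
        bind:
          propagation: rshared
```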
Related
Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp of the docker container.
My question
I need these log files to be automatically saved on the host too, so how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I have also seen Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, the mount is established in one direction only: from the host into the container, not vice versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" Linux mount --bind, which means that the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, it is effectively overridden for the lifetime of the container.
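The "override" behavior described above can be seen with a plain mount --bind even without Docker. A sketch, assuming a kernel that allows unprivileged user namespaces (so no root is needed); the directory names are placeholders:

```shell
# Prepare a target directory that already has a file in it, and an
# empty source directory to bind over it.
mkdir -p /tmp/bind_src /tmp/bind_dst
touch /tmp/bind_dst/original_file

# Inside a new user+mount namespace, bind the empty source over the
# target: the target's original contents are shadowed, not deleted.
unshare --mount --map-root-user sh -c '
  mount --bind /tmp/bind_src /tmp/bind_dst
  ls /tmp/bind_dst    # prints nothing: original_file is hidden
'

# Outside the namespace the bind never happened, so the file is back.
ls /tmp/bind_dst
```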
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host directory volume mount).
You could also use docker cp to move files from your container to your host. That is kinda hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly this purpose: one-off copies and debugging are great situations for it.
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can run docker inspect <containername> and see if your logs appear in the /var/lib/docker/volumes/... directory associated with the volume of your container.
Or you can redirect the output of docker logs <containername> to a host file.
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
I have two Ubuntu server VMs running on the same Proxmox server, both running Docker. I want to migrate one container from one of the VMs to the other. For that I need to attach a USB drive to the target VM, which will be mounted inside the docker container. I mounted the drive in exactly the same way in both VMs (the old one is shut down, of course) and the mounting works: I can access the directory and see the contents of the drive. Now I want to run the container with the exact same command as I used on the old VM, which looks something like this:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/data",destination=/internal_data
This works in the old VM, but on the new one it says:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /data.
See 'docker run --help'.
I don't understand what's wrong. /data exists and is owned by root, the same as it is on the old VM. In fact, it's the same drive with the same contents. If I shut down the new VM and boot up the old one with the drive mounted in exactly the same way, it just works.
What can cause this error, if the source path does in fact exist?
I fixed it by mounting the drive at a mount point under /mnt/.
I changed nothing else, and on the other VM it works when mounting at the root with the same user and permissions. No idea why that fixed it.
I'm trying to make a Docker image that logs onto a Kerio VPN and then mounts a remote samba directory onto /mnt.
The mounting is done using mount -t cifs -o username=USER,password=PWD //ABC/randomDirectory /mnt and it succeeds. When I list the contents of /mnt from the container itself I can see all the files and directories on the remote server, but when I list the host directory that has been mounted on the container when starting it (-v /absolute/path/to/mountpoint:/mnt), it comes up empty.
I tried adding a simple touch /mnt/test at the start of the ENTRYPOINT script, and that creates a file in /absolute/path/to/mountpoint which is also there when I list it from inside the container. Once I mount the CIFS share, listing from inside the container shows all the files and directories on the remote, while listing from the host shows only the created test file.
It seems like the mount command inside the container "detaches" the docker volume.
EDIT: mounting to a subdirectory of the mounted volume doesn't work either
The volume can be mounted with shared propagation by specifying -v /local/path:/mnt:shared
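For :shared (or :rshared) to be accepted, the host path itself has to live on a mount with shared propagation; on most systemd-based hosts, / is already shared. A quick way to check before starting the container (a sketch; findmnt is part of util-linux, and / stands in for your own path):

```shell
# Show the propagation mode of the mount that backs the given path.
findmnt -n -o TARGET,PROPAGATION /

# If it reports "private", make it shared first (needs root):
#   sudo mount --make-rshared /
```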
I have a docker container hosting a Jupyter notebook server on my PC, with a directory mounted from the local host. Let's call this directory /docker-mount.
Next, I created a new directory under /docker-mount, /docker-mount/files, and then mounted some CIFS-based storage from another PC's file system onto /docker-mount/files.
I expected the docker container's file system to be able to use this network mount, but only the locally created files directory is visible, not the contents mounted inside it.
I assume this is how the Linux file system works, but I'm still not confident in that idea.
Is there any way to make this possible?
I suggest that you mount your CIFS shared drive as a Docker volume instead. Relying on a drive shared with your host computer is not reliable in my experience, especially with respect to file changes being reflected in the Docker world. Besides, your production environment won't have this shared drive with your development host.
Create a docker volume using the Netshare CIFS driver:
http://netshare.containx.io/docs/cifs#creating-a-volume-with-docker-volume
Then mount your volume normally on any container that requires access to the cifs drive.
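If installing the Netshare plugin is not an option, a similar result can be had with Docker's built-in local volume driver, which can mount CIFS directly. A hedged sketch in compose form (server name, share, credentials, and service name are all placeholders):

```yaml
volumes:
  cifs_files:
    driver: local
    driver_opts:
      type: cifs
      device: "//fileserver/shared_files"   # placeholder server/share
      o: "addr=fileserver,username=USER,password=PWD,vers=3.0"

services:
  notebook:                                 # hypothetical service name
    image: jupyter/base-notebook
    volumes:
      - cifs_files:/docker-mount/files
```

The share is then mounted by the Docker daemon itself when the container starts, so it behaves like any other named volume rather than a host directory.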
If I launch a docker container with
docker run -v /foo:/foo ...
I can see the contents of /foo on the host, from within the container.
While the docker container is running, if I run
mount -t ext4 /dev/... /foo/something
I will NOT see the new mount point in /foo inside the container - is there any way to make it show up? (if I launch the docker container AFTER the mount point on the host is established, it is ok).
Docker containers run in a private mount namespace, which means that mounts made on the host after the container starts do not propagate into the container. The kernel documentation on shared subtrees goes into detail about mount propagation and private vs shared vs slave mounts.
The short answer to your question is that there isn't an easy way to expose a new mount like this into a container. It's possible, probably involving the use of nsenter to run commands inside the container's mount namespace and change the propagation flags on the mounts, but I wouldn't go there.
In general, if you need to change the storage configuration of a container, you re-deploy the container.
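If re-deploying is acceptable, the new container can be started with shared bind propagation, after which mounts created on the host under /foo do show up inside it. A sketch in compose form (service and image are placeholders; the docker run short form would be -v /foo:/foo:rshared, and /foo must itself be on a shared mount on the host):

```yaml
services:
  app:                      # hypothetical service name
    image: ubuntu
    command: sleep infinity
    volumes:
      - type: bind
        source: /foo
        target: /foo
        bind:
          propagation: rshared
```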