I'm trying to make a Docker image that logs onto a Kerio VPN and then mounts a remote samba directory onto /mnt.
The mounting is done with mount -t cifs -o username=USER,password=PWD //ABC/randomDirectory /mnt and it succeeds. When I list the contents of /mnt from inside the container I can see all the files and directories on the remote server, but when I list the host directory that was bind-mounted into the container at startup (-v /absolute/path/to/mountpoint:/mnt), it comes up empty.
I tried adding a simple touch /mnt/test at the start of the ENTRYPOINT script; that creates a file in /absolute/path/to/mountpoint which is also visible from inside the container. Once I mount the CIFS share, listing from inside the container shows all the files and directories on the remote, while listing from the host shows only the created test file.
It seems like the mount command inside the container "detaches" the docker volume.
EDIT: mounting to a subdirectory in the mounted volume doesn't work either
The volume can be given shared mount propagation by appending :shared to the bind mount: -v /local/path:/mnt:shared
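A minimal sketch of what that looks like end to end (the image name is a placeholder; the host path is taken from the question):

```shell
# Start the container with shared mount propagation on the bind mount,
# plus whatever privileges your CIFS mount already needs:
docker run --rm -it --privileged \
  -v /absolute/path/to/mountpoint:/mnt:shared \
  my-vpn-image

# Inside the container, the CIFS mount should then propagate back:
#   mount -t cifs -o username=USER,password=PWD //ABC/randomDirectory /mnt
# After that, listing /absolute/path/to/mountpoint on the host should
# show the remote files as well.
```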
Related
Assume that i have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp inside the container.
My question
I need these log files to be automatically saved on the host too. So how can I mount the /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time, which means you cannot change them after you've started the container. Also, volume mounts are one-way only: from the host to the container, not vice versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, it is effectively overridden for the lifetime of the container.
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or if you bring cron into your container, you could use a cron to periodically copy those files to that host directory volume mount).
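The copy-in-a-startup-script option can be sketched roughly like this (all paths and names here are assumptions for illustration, not from the original answer):

```shell
# Sketch: a helper the startup script (or a cron job inside the
# container) can call to sync the app's logs into a host-mounted
# directory. SRC/DEST defaults are hypothetical.

SRC="${SRC:-/var/lib/myapp}"            # where the app writes its logs
DEST="${DEST:-/tmp/myapp-host-logs}"    # host dir mounted via -v

backup_logs() {
    mkdir -p "$DEST"
    # -u only copies files newer than the copy already in $DEST
    cp -u "$SRC"/*.log "$DEST"/ 2>/dev/null || true
}

backup_logs
```

A cron entry in the container could then invoke backup_logs (or the script containing it) every few minutes.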
You could also use docker cp to move files from your container to your host. Now that is kinda hacky and definitely not something you should use in your infrastructure automation. But it does work very well for that exact purpose. One-off or debugging is a great situation for that.
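For example, copying files out of the running container might look like this (the container name and paths are placeholders):

```shell
# Copy one log file out of a running container onto the host:
docker cp myapp:/var/lib/myapp/app.log /var/lib/myapp-backup/app.log

# docker cp also works on whole directories:
docker cp myapp:/var/lib/myapp /var/lib/myapp-backup
```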
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container at docker run time.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see whether your logs appear under /var/lib/docker/volumes/... for the volume associated with your container.
Or you can redirect the output of docker logs <containername> to a host file.
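That redirection could look like this (container name and target path are placeholders):

```shell
# Append the container's stdout/stderr to a file on the host:
docker logs mycontainer >> /var/log/mycontainer.log 2>&1

# Or follow the stream continuously in the background:
docker logs -f mycontainer >> /var/log/mycontainer.log 2>&1 &
```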
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container you start, not to an existing running one)
I have a privileged docker container that will mount a custom File System with FUSE.
It is achieved by bind-mounting /dev and /sys from the host into the container and running some custom software which accesses a block device (e.g. /dev/sdX) inside the container to mount a custom FS on some mount point, let's say /mnt/some_mountpoint_inside_the_container (everything still happens inside the container).
Now, I would like to access, from the host, this mount point that is mounted inside the Docker container, but to no avail. So far, I have tried:
In my docker-compose.yaml, I defined a bind-mounted volume from host to container, e.g.:
...
volumes:
- /mnt/mountpoint_at_host:/mnt/some_mountpoint_inside_the_container
...
Then, I FUSE-mounted the custom FS inside the container on /mnt/some_mountpoint_inside_the_container. Even though I have added files to /mnt/mountpoint_at_host on my host, the changes are not reflected within the container (i.e. ls -al /mnt/some_mountpoint_inside_the_container inside the container returns nothing). Only AFTER I un-mount /mnt/some_mountpoint_inside_the_container within the container do the files created on the host appear in the container.
I have also tried to bind mount a parent folder:
...
volumes:
- /mnt/mountpoint_at_host:/mnt/parent_folder
...
Then I created a folder on my host: mkdir -p /mnt/mountpoint_at_host/the_real_mntpt.
I have then again, FUSE mounted the custom FS in the docker container on:
/mnt/parent_folder/the_real_mntpt.
But still, changes on the host are not reflected on the container side, or on the underlying block device.
Is there any way I can access, from the host, the mount point that is mounted within the container? I have thought of approaches like running an NFS service inside the container after FUSE-mounting the FS and then exposing the NFS port to the host, but that seems a bit inefficient.
EDIT: I am using Ubuntu with docker.io/docker-compose from apt-get. The container itself is a CentOS 8.
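Given the :shared volume suggestion earlier in this thread, one thing to try is shared bind propagation, so mounts created inside the container propagate back to the host. A sketch with docker run's --mount syntax (paths and image name are placeholders; whether this works with your FUSE setup is an assumption to verify):

```shell
# Bind mount with rshared propagation, so the FUSE mount made inside
# the container becomes visible under the host path too:
docker run --rm -it --privileged \
  --mount type=bind,source=/mnt/mountpoint_at_host,target=/mnt/parent_folder,bind-propagation=rshared \
  my-fuse-image

# Propagation only works if the host path lives on a shared mount;
# on many systems that first requires:
#   sudo mount --make-rshared /mnt
```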
I have two ubuntu server VMs running on the same proxmox server. Both are running docker. I want to migrate one container from one of the VMs to the other. For that I need to attach a USB drive to the target VM which will be mounted inside the docker container. I mounted the drive exactly the same way in both VMs (the old one is shut down of course) and the mounting works, I can access the directory and see the contents of the drive. Now I want to run the container with the exact same command as I used on the old vm which looks something like this:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/data",destination=/internal_data
This works in the old VM, but on the new one it says:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /data.
See 'docker run --help'.
I don't understand what's wrong. /data exists and is owned by root, the same as it is on the old VM. In fact, it's the same drive with the same contents. If I shut down the new VM and boot up the old one with the drive mounted in exactly the same way, it just works.
What can cause this error, if the source path does in fact exist?
I fixed it by moving the drive's mount point under /mnt/.
I changed nothing else, and on the other VM it works when mounted directly under the root directory with the same user and permissions. No idea why that fixed it.
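Concretely, the fix amounts to something like the following (the device name and image name are placeholders; the docker run flags are from the question):

```shell
# Mount the USB drive under /mnt instead of at /data:
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# Then bind-mount the new location into the container:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 \
  --mount type=bind,source=/mnt/data,destination=/internal_data \
  the-image
```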
I have a Docker container hosting a Jupyter notebook server on my PC, with a directory mounted from the local host. Let's call this directory /docker-mount.
Next, I created a new directory under /docker-mount, say /docker-mount/files, and then mounted some CIFS-based storage from another PC's file system onto /docker-mount/files.
I expected this network mount to be usable from the Docker container's file system, but only the locally created files directory is visible, not the contents mounted inside it.
I assume this is how the Linux file system works, but I'm still not confident about that.
Is there any way to make this possible?
I suggest that you mount your CIFS shared drive as a Docker volume instead. Relying on a drive shared with your host computer is not reliable in my experience, especially with respect to file changes being reflected in the Docker world. Besides, your production environment won't have this shared drive with your development host.
Create a Docker volume using the Netshare CIFS driver:
http://netshare.containx.io/docs/cifs#creating-a-volume-with-docker-volume
Then mount your volume normally on any container that requires access to the CIFS drive.
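As a rough sketch of the workflow from the linked Netshare docs (server, share, credentials and mount path are placeholders; check the docs page for the exact flags, as this is from memory):

```shell
# 1. Start the Netshare daemon in CIFS mode on the host:
sudo docker-volume-netshare cifs --username user --password pass

# 2. Run a container using the cifs volume driver, naming the
#    volume after the server and share:
docker run -it --volume-driver=cifs -v server/share:/data ubuntu /bin/bash
```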
I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried adding the appropriate entries to the /etc/fstab file in the container and mounting them with mount -a. I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
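For example, granting just that capability instead of full --privileged might look like this (the image name is a placeholder):

```shell
# Grant only the capability that mount needs:
docker run --rm -it --cap-add SYS_ADMIN my-image

# Inside the container, mounting the CIFS share should then be allowed:
#   mount -t cifs -o username=USER,password=PWD //server/share /mnt
```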
Yes.
There is a closed issue, "mount.cifs within a container":
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out and:
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
within the container then works
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one step to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
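The two-step version might look like this (server, share, user and directory are placeholders; the -Tc tar-create mode is described in the smbclient man page):

```shell
# 1. Pull a directory from the share into a local tar archive:
smbclient //server/share -U myuser -Tc files.tar path/to/dir

# 2. Unpack it locally:
tar -xf files.tar
```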
You can use a Netshare docker volume plugin which allows to mount remote CIFS/Samba as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them as --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER, and WinShare here. This will ask for your sudo password, then for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you use the uid and gid options you will have write access without using sudo all the time.
Now you can run your container by using -v tag and share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.