Managing directory permissions across the host and Docker container environments - linux

I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers with ubuntu:14.04 as the base image:
mysql db
nginx
php-fpm
The recurring problem I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
- symfony-code:/var/www/app
The mounted directories will always be owned by root or by an unidentified user (only a numeric user ID is visible when running ls -al) inside the container.
This essentially makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on the public directories solves the problem, but as soon as I want to write to, e.g., the 'cache' directory from the host (where the user is ltarasiewicz), I get a permission denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they are again owned by root and later inaccessible to the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?

A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the official Redis image for a good example of this. (So the answer to Q3 is yes, and you should, but via a Dockerfile; don't make changes to containers by hand.)
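A minimal sketch of that pattern, based on the ubuntu:14.04 base image from the question (the user/group names and IDs are arbitrary placeholders, not anything SfDocker actually uses):
FROM ubuntu:14.04
# create an unprivileged user and group (names and IDs are placeholders)
RUN groupadd -g 1000 app && useradd -u 1000 -g app -m app
# ... install nginx / php-fpm, copy configuration, etc.
# switch to the unprivileged user for the rest of the build and at runtime
USER app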
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in question 2.
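As a rough sketch of that entrypoint pattern, assuming gosu is installed in the image (as in the official Redis image) and that www-data is the user your web server runs as:
#!/bin/bash
set -e
# docker-entrypoint.sh (sketch): fix ownership of the mounted volume, then drop root
chown -R www-data:www-data /var/www/app
exec gosu www-data "$@"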
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
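Roughly, and with a placeholder image name (myimage), a data container for the code volume from the question could look like:
docker create -v /var/www/app --name symfony-data myimage /bin/true
docker run -d --volumes-from symfony-data --name php-fpm myimage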
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).

Related

mounting volume from inside the container in local directory [duplicate]

Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
It works properly and stores some logs in /var/lib/myapp inside the container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem; I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; that means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host to the container, and not vice versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means that the host directory will temporarily "override" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, that effectively means it will be overridden for the lifetime of the container.
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, you could use a cron job to periodically copy those files to that host directory volume mount).
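A minimal sketch of that first option, assuming the container is started with -v /var/lib/myapp-on-host:/host-logs and that myapp is whatever command actually runs your application:
#!/bin/bash
# starter.sh (sketch): copy the logs into the host-mounted directory once a minute
( while true; do cp -r /var/lib/myapp/. /host-logs/; sleep 60; done ) &
exec myapp "$@"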
You could also use docker cp to move files from your container to your host. Admittedly that is kind of hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly this purpose; one-off copies or debugging are a great fit for it.
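For example, with the container name and path from the question above:
docker cp myapp:/var/lib/myapp /var/lib/
This copies the container's /var/lib/myapp directory to /var/lib/myapp on the host.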
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the other way around: you can mount a host folder into your container at docker run time.
(without removing current container)
I don't think so.
Right now, you can run docker inspect <containername> and check whether your logs appear under the /var/lib/docker/volumes/... path associated with your container's volume.
Or you can redirect the output of docker logs <containername> to a host file.
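For example, using the container name from the question (the host file name is just an example):
docker inspect myapp                     # look for a /var/lib/docker/volumes/... path in the volume information
docker logs myapp > /tmp/myapp.log 2>&1  # dump everything the app wrote to stdout/stderr into a host file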
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)

how to access the file system of my local PC from within the docker container

I want to start the following docker container and have terminal access to it:
docker run -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
The problem is that inside the terminal I cannot find any of the files from my file system; there is no ~/Desktop or any similar directory.
Question: how to access the file system of my local PC from within the docker container?
By default, containers cannot see the file system of their host.
If you want to achieve this, you will have to explicitly "mount" whatever directories you want to see using the -v flag, like this:
docker run -v ~/Desktop:/host-desktop -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
If you run that command, you will see the contents of your desktop in the container's file system, at /host-desktop.
You really would not want your containers to be able to see the entire host file system. That would be dangerous, especially if the container has write permission. You should always mount only the exact files/directories you want the container to access.
For the most part, any project I have worked on that uses docker does "volume mounting" so that the container can write files and the developer can easily access them on the host (e.g. selenium tests taking screenshots) or so the developer can edit source code and the container will see the update and hot-reload (e.g. nodejs development). When doing the latter (hot-reload example), it is usually wise to mount in read-only mode.
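For example, a read-only source mount for the hot-reload case might look like this (the image name, host path and command are placeholders):
docker run -it -v "$(pwd)/src:/usr/src/app/src:ro" my-node-image npm run dev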
See the docs for more details: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only

how to rsync from a docker container to my host computer

My current development environment allows for automatic code reload whenever I change a file (i.e. nodemon / webpack). However, I am setting up a Kubernetes (minikube) environment so that I can quickly open 3-4 related services at once.
Everything is working fine, but it is not currently doing the automatic code reload. I tried mounting the volume, but there is some conflict with the way Docker and VirtualBox handle files, such that file changes from the host are not reflected in the Docker container. (That's not the first link I have that appears related to this problem, it's just the first I found while googling it on another day)...
Anyway, long story short, people have trouble getting live reload working in development. I've found the problem littered throughout the web with very few solutions. The best solution I've found so far is one where the person used tar from the host to sync folders.
However I would like a solution from the container. The reason is that I want to run the script from the container so that the developer doesn't have to run some script on his host computer every time he starts developing in a particular repo.
To do this, however, I need to run rsync from the container to the host machine, and I'm having a surprising amount of trouble figuring out the syntax for that.
Let's pretend my app exists in my container and host respectively as:
/workspace/app # location in container
/Users/terence/workspace/app # location in host computer
How do I rsync from the container to the host? I've tried using 172.17.0.17 and 127.0.0.1 to no avail. I'm not entirely sure whether there is a way to do it.
examples I tried:
rsync -av 172.17.0.17:Users/terence/workspace/app /workspace/app
rsync -av 127.0.0.1:Users/terence/workspace/app /workspace/app
If you're running the rsync from the host (not inside the container), you could use docker cp instead:
e.g., docker cp containerName:/workspace/app /Users/terence/workspace/app
Could you clarify:
1. are you running the rsync from the host or from inside the container?
If it's from inside the container, it'll depend a lot on the --network the container is attached to (i.e., bridged or host) and also on the mounted volumes (i.e., did you use the -v flag when you started the container?).
Update: for rsync to work from within the container, you need to expose the host's directory to the container (see the sketch after the note below).
As you think of a solution, keep this in mind: host dir as a data volume
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
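So, as a sketch under that constraint, using the paths from the question (my-dev-image is a placeholder, and rsync must be installed in the image): bind-mount the host copy at run time, then the rsync stays entirely local to the container and needs no SSH access to the host.
# on the host: expose the host copy of the app inside the container
docker run -it -v /Users/terence/workspace/app:/host-app my-dev-image /bin/bash
# inside the container: sync the container copy into the host-mounted copy
rsync -av --delete /workspace/app/ /host-app/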

ownership of files that are written by the docker container to mounted volume

I'm using a non-root user on a secured environment to run a stock DB Docker container (Elasticsearch). Of course, I want the data to be mounted so I won't lose it when the container is destroyed.
The problem is that this container writes to that volume with root ownership, and then the host user doesn't have permission to move or rm those files.
I know that most Docker images use the root user inside, but how can I control the file ownership on the host machine?
You can create a data container docker create -v /usr/share/elasticsearch/data --name esdata elasticsearch /bin/true, then use it in your container docker run -d --volumes-from esdata --name some-elasticsearch elasticsearch.
This is the preferred data pattern for Docker; you can find out more on this Docker page.
To answer your question, use docker run --user "$(id -u)" ...; it will run the program inside the container with the current user's ID. You might then run into the same question I did.
I answered it there; I hope it's useful:
Docker with '--user' can not write to volume with different ownership
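As a rough sketch of that approach for the Elasticsearch case (the host path is a placeholder and must already be writable by your UID; see the linked question for the ownership issues you can still hit):
mkdir -p "$HOME/esdata"
docker run -d --name some-elasticsearch \
  --user "$(id -u):$(id -g)" \
  -v "$HOME/esdata":/usr/share/elasticsearch/data \
  elasticsearch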

Mount SMB/CIFS share within a Docker container

I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will be run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried adding the appropriate entries to the /etc/fstab file in the container and mounting them with mount -a, but I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
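For instance, something along these lines (the image name is a placeholder, and the exact capability set your container needs may differ; the two capabilities match the ones named in the answer below):
docker run -it --cap-drop ALL --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH my-cifs-image /bin/bash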
Yes.
There is a closed issue, mount.cifs within a container:
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out and:
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<user>
within the container then works
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
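A sketch of that two-step approach (server, share, user and path names are placeholders; check the smbclient man page for the exact -T flags in your version):
# step 1: pull a directory from the share into a local tar archive
smbclient //server/share -U myuser -Tc portfolio.tar path/to/images
# step 2: unpack the archive locally
tar -xf portfolio.tar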
You can use the Netshare Docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them with --privileged.
Here is how I solved this issue:
First, mount the share on the server that runs Docker:
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER and WinShare here. This will ask for your sudo password, then it will ask for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you use the uid and gid options, you will have write access without using sudo all the time.
Now you can run your container using the -v flag to share a volume between the server and the container.
Let's say you run it like the following:
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.
