Restrict access of another Linux user to a Docker container - linux

I have two Linux users: ubuntu and my_user.
I built a simple Docker image and ran a container from it.
In my docker-compose.yml, I volume-mount some files from the local machine into the container; those files were created by the 'ubuntu' user.
Now if I log in as 'my_user' and access the container created by 'ubuntu' using docker exec, I am able to access any files present in the container.
My requirement is to restrict 'my_user' from accessing the contents of the Docker container created by the 'ubuntu' user.

This is not possible to achieve currently. If your user can execute Docker commands, that user effectively has root privileges, so it is impossible to prevent them from accessing any files.

You can add "ro", meaning read-only, after the data volume, like this:
HOST:CONTAINER:ro
Or you can add read-only properties in your docker-compose.yml.
Here is an example of how to specify read-only containers in docker-compose:
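A minimal sketch (the service name, image, and paths here are placeholders):

version: "3"
services:
  app:
    image: my_image:latest
    read_only: true
    volumes:
      - ./host_data:/app/data:ro

read_only: true makes the container's filesystem read-only, and the :ro suffix mounts the host directory read-only.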

@surabhi, the only option to restrict file access is by adding these fields in the docker-compose file (see the sketch below):
read_only: flag to set the volume as read-only
nocopy: flag to disable copying of data from a container when a volume is created
You can find more information here.
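For illustration, a sketch using the compose long volume syntax (compose file format 3.2+; the volume name and target path are placeholders, and app_data would also be declared under the top-level volumes: key):

volumes:
  - type: volume
    source: app_data
    target: /app/data
    read_only: true
    volume:
      nocopy: true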

You could install and run an sshd in that container, map port 22 to an available host port, and manage user access via SSH keys.
This would not allow the user to manage things via docker commands, but it would give that user access to that container.
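A rough sketch of the idea (the image name and host port are placeholders; the image is assumed to already run openssh-server and to have the user's public key in authorized_keys):

docker run -d -p 2222:22 --name app_with_ssh my_ssh_image
ssh -p 2222 restricted_user@localhost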

Related

How to prevent other users from accessing APIs hosted in docker container?

I have a docker container that hosts REST APIs.
As the root user I am able to access it via its internal IP from the host machine, like below:
#docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
172.20.0.7
#curl -X GET http://172.20.0.7:8080/v1/api
Welcome!!
I can do the same as a non-privileged user too! (I get "Permission denied" if I try to execute docker commands as this user.)
Is there a way I can prevent non-privileged users from accessing the container's APIs?
I guess you could create a network interface whose use is allowed only to your root user (using iptables), then publish your service only on that interface (-p <your_new_ip>:port:port).
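A rough sketch of that idea (the port, image name and rule are assumptions, not a tested recipe): publish the port only on a specific address and reject locally generated connections from non-root users with an iptables owner-match rule:

docker run -d -p 127.0.0.1:8080:8080 myapi_image
iptables -A OUTPUT -p tcp --dport 8080 -m owner ! --uid-owner 0 -j REJECT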

Linux user groups missing when user mounted to container

Issue: The groups attached to a Linux user are not visible inside the container.
Workflow:
Created a Docker image in which a user and group named sample:sample (8000:8000) is created.
Created a container from the same Docker image and mounted the /etc/passwd file with read-only access.
Command: docker run -itd --user "8000:8000" -v /etc/passwd:/etc/passwd:ro docker_image_name:latest bash
Note: The user & group sample:sample(8000:8000) also exists on the host.
The groups attached to the sample user are sample and docker, as checked on the host using the groups command.
Exec'd into the container and ran the following commands:
Command 1: whoami
Output: sample
Command 2: id -u
Output: 8000
Command 3: id -g
Output: 8000
Command 4: groups
Output: sample
Observations:
As we can see, within the container the only group attached to the sample user is sample; docker is missing.
Expected Behaviour:
As the sample user is present on the host as well as in the container, I want the groups associated with the host user inside the container as well, i.e., I want the host user details to override the ones present in the container.
The issue lies in the way Docker loads the user and group information.
Issues have already been reported to Docker: it fails to load the additional group information stored in the /etc/group file, so even if we mount /etc/group, Docker doesn't honor it.
Hence, the solution is to associate the required groups using the --group-add option provided by Docker.
Note: The group provided must be a valid group, and it will then be associated with your user in addition to the already existing groups.
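For example, reusing the command from the question (the docker group must resolve inside the container's /etc/group, or you can pass its numeric GID instead):

docker run -itd --user "8000:8000" --group-add docker -v /etc/passwd:/etc/passwd:ro docker_image_name:latest bash

After exec'ing in, groups (or id -G) should then report docker as well.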

Ownership of files that are written by the Docker container to a mounted volume

I'm using a non-root user on a secured environment to run a stock DB Docker container (elasticsearch). Of course I want the data to be mounted so I won't lose it when the container is destroyed.
The problem is that the container writes to that volume with root ownership, and then the host user doesn't have permission to move or remove those files.
I know that most docker images use root user from inside, but how can I control the file ownership of the hosting machine?
You can create a data container with docker create -v /usr/share/elasticsearch/data --name esdata elasticsearch /bin/true, then use it in your container with docker run -d --volumes-from esdata --name some-elasticsearch elasticsearch.
This is the preferred data pattern for Docker; you can find out more on this Docker page.
To answer your question, use docker run --user "$(id -u)" ...; it will run the program within the container with the current user's ID. Then you might have the same question as I did.
I answered it there; I hope it might be useful:
Docker with '--user' can not write to volume with different ownership
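As a sketch with the elasticsearch image from the question (the host directory name is a placeholder; it must already exist and be writable by your user, otherwise you hit the linked problem):

mkdir -p "$PWD/esdata"
docker run -d --user "$(id -u):$(id -g)" -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch

Files written to the volume are then owned by your host user instead of root.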

Managing directory permissions across the host and Docker container environments

I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers with ubuntu:14.04 as the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
- symfony-code:/var/www/app
The mounted directories will always be owned by root or an unidentified user (only a user ID is visible when running ls -al) inside the container.
This essentially makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on the public directories solves the problem, but as soon as I want to write to e.g. the 'cache' directory from the host (where the user is ltarasiewicz) I get a permission denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they are again owned by root and later inaccessible to the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the Redis official image for a good example of this. (So the answer to Q3 is yes, and do, but via a Dockerfile - don't make changes to containers by hand).
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in question 2 (see the sketch at the end of this answer).
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).
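As a sketch of the chown-in-entrypoint pattern mentioned above (the user name, path and use of gosu are assumptions, not taken from SfDocker; gosu must be installed in the image and the script set as the ENTRYPOINT):

#!/bin/sh
# entrypoint.sh: fix ownership of the mounted volume, then drop privileges
chown -R www-data:www-data /var/www/app
exec gosu www-data "$@"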

Mount SMB/CIFS share within a Docker container

I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried to add the appropriate entries to the /etc/fstab file in the container and mount them with mount -a, but I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
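For example, a sketch that drops everything and adds back only the mount-related capability (the image name is a placeholder):

docker run -it --cap-drop ALL --cap-add SYS_ADMIN my_image bash

For CIFS mounts specifically, DAC_READ_SEARCH is also needed, as described below.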
Yes.
There is a closed issue, "mount.cifs within a container":
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out, and
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
within the container then works.
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one to create the .tar and then another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
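For example, a rough two-step sketch (server, share, user and paths are placeholders):

smbclient //server/share -U myuser -Tc images.tar clients/portfolio
tar -xf images.tar

The first command creates a tar of the given directory on the share; the second extracts it locally.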
You can use the Netshare docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them as --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER and WinShare here. This will ask for your sudo password, then it will ask for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you use the uid and gid options you will have write access without using sudo all the time.
Now you can run your container using the -v flag to share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.
