Issue: The groups attached to a Linux user are not visible inside the container.
Workflow:
Created a Docker image in which a user and group named sample:sample (8000:8000) are created.
Created a container from that image and mounted the host's /etc/passwd file read-only.
Command: docker run -itd --user "8000:8000" -v /etc/passwd:/etc/passwd:ro docker_image_name:latest bash
Note: The user & group sample:sample (8000:8000) also exist on the host.
The groups attached to the sample user on the host are sample and docker, as checked with the groups command.
Exec'd into the container and ran the following commands:
Command 1: whoami
Output: sample
Command 2: id -u
Output: 8000
Command 3: id -g
Output: 8000
Command 4: groups
Output: sample
Observations:
As we can see, within the container the only group attached to the sample user is sample; docker is missing.
Expected Behaviour:
As the sample user is present on the host as well as in the container, I want the groups associated with the host user to be available inside the container too, i.e., I want the host user's details to override the ones present in the container.
The issue lies in the way Docker loads the user and group information.
Issues have already been reported against Docker because it fails to load the supplementary group information stored in the /etc/group file, so even if we mount the /etc/group file Docker doesn't honor it.
Hence, the solution is to associate the required groups using the --group-add option provided by Docker.
Note: The group provided must be a valid group, and it will then be associated with your user in addition to the already existing groups.
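For the workflow above, a minimal sketch of such a run command (the image name comes from the example; the group passed to --group-add must be valid inside the container, otherwise pass its numeric GID):
docker run -itd --user "8000:8000" --group-add docker -v /etc/passwd:/etc/passwd:ro docker_image_name:latest bash
# if the docker group is not defined inside the image, use the host's numeric GID instead
docker run -itd --user "8000:8000" --group-add "$(getent group docker | cut -d: -f3)" -v /etc/passwd:/etc/passwd:ro docker_image_name:latest bash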
Related
I need to run a Docker container with an application that uses the rpio package.
I do not understand this part:
By default the module will use /dev/gpiomem when using simple GPIO access. To access this device, your user will need to be a member of
the gpio group, and you may need to configure udev with the following
rule (as root):
$ cat >/etc/udev/rules.d/20-gpiomem.rules <<EOF
SUBSYSTEM=="bcm2835-gpiomem", KERNEL=="gpiomem", GROUP="gpio", MODE="0660"
EOF
For access to i²c, PWM, and SPI, or if you are running
an older kernel which does not have the bcm2835-gpiomem module, you
will need to run your programs as root for access to /dev/mem.
As I'm running my Node.js application in a Docker image/container, I don't understand how to set the group membership, which group name to use, or where to run that udev rules command.
I'd be very thankful for any explanation.
The Docker user (this should be the logged-in user, e.g. "pi") needs to be in the "gpio" group.
# see all groups the user is assigned to
groups
# if the user is not assigned to gpio, run the following:
sudo adduser $(whoami) gpio
You need to make the device /dev/gpiomem available inside the docker container.
# e.g.
docker run -d --device /dev/gpiomem <image>
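If the gpio group does not exist inside your image, one possible sketch (my_node_app is a placeholder image name) is to pass the device through and add the container user to the device's owning group by its numeric GID taken from the host:
# stat -c '%g' prints the numeric GID that owns /dev/gpiomem on the host
docker run -d --device /dev/gpiomem --group-add "$(stat -c '%g' /dev/gpiomem)" my_node_app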
I have two Linux users, named ubuntu and my_user.
Now I build a simple Docker image and run a container from it.
In my docker-compose.yml, I volume-mount into the container some files from the local machine that were created by the 'ubuntu' user.
Now if I log in as 'my_user' and access the container created by the 'ubuntu' user with the docker exec command, I am able to access any files present in the container.
My requirement is to prevent 'my_user' from accessing the content of the Docker container that was created by the 'ubuntu' user.
This is not possible to achieve currently. If your user can execute Docker commands, that effectively means the user has root privileges, so it's impossible to prevent this user from accessing any files.
You can add "ro",means readOnly after the data volumn.Like this
HOST:CONTAINER:ro
Or you can add the read_only property in your docker-compose.yml.
Here is an example of how to specify read-only containers in docker-compose:
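A minimal sketch of such a compose file (the service and image names are placeholders):
version: "3"
services:
  app:
    image: my_image        # placeholder image name
    read_only: true        # make the container's filesystem read-only
    volumes:
      - ./data:/data:ro    # host directory mounted read-only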
@surabhi, the only option to restrict file access is by adding these fields in the docker-compose file:
read_only: flag to set the volume as read-only
nocopy: flag to disable copying of data from a container when a volume is created
You can find more information here
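For reference, a sketch of how those flags can appear with the long volume syntax in a compose file (the service, image, and volume names are placeholders):
services:
  app:
    image: my_image          # placeholder image name
    volumes:
      - type: volume
        source: mydata
        target: /data
        read_only: true      # mount the volume read-only
        volume:
          nocopy: true       # don't copy existing container data into the new volume
volumes:
  mydata: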
You could install and run an sshd in that container, map port 22 to an available host port, and manage user access via SSH keys.
This would not allow the user to manage things via Docker commands, but it would give that user access to that container.
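A rough sketch of that approach, assuming an image that already runs sshd and an unprivileged user whose public key you control (all names below are placeholders):
# publish the container's sshd on a free host port (2222 here)
docker run -d -p 2222:22 --name restricted_app image_with_sshd
# my_user then connects with an SSH key instead of docker exec
ssh -p 2222 appuser@localhost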
When I try to pull a Docker image to a machine (on which I do not have sudo), I get an error:
failed to register layer: ApplyLayer exit status 1 stdout: stderr: Container ID 110088952 cannot be mapped to a host ID.
I found a troubleshooting page that says this error occurs when the user-namespace feature is turned on, which requires that the container IDs be between 0 and 65536. I checked with docker info and it does appear to be on:
Security Options:
userns
My question is: how do I get around this issue? I have no idea how to make sure the "container IDs are in the range 0 to 65536"... They suggest turning on user namespaces on the computer I build the image on, but the command they suggest does not work on my Mac:
$ sudo docker daemon --userns-remap=default
docker: 'daemon' is not a docker command.
See 'docker --help'.
Not sure if that's the right way to go, but changing /etc/subuid and /etc/subgid helped me. If you specify default in userns-remap, Docker creates a user called dockremap and automatically adds records to /etc/subuid and /etc/subgid.
Read more about it here, but it's important that the container's IDs fall within the valid UID range; if they don't, change the range.
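For illustration, the remapping records and the daemon configuration might look roughly like this (dockremap with the common default range is shown; widen the range so it covers the IDs reported in the error, then restart the daemon):
# /etc/subuid and /etc/subgid (one matching line in each file)
dockremap:100000:65536
# /etc/docker/daemon.json -- one way to enable the remapping
{
  "userns-remap": "default"
}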
Maybe it is not docker daemon ... but dockerd ...? Don't forget to kill dockerd first, before you launch your commands.
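In other words, on current releases the daemon binary is dockerd, so the suggested command would look something like:
# stop the running daemon first (the exact command depends on your init system)
sudo systemctl stop docker
# then start the daemon with user-namespace remapping enabled
sudo dockerd --userns-remap=default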
I use the following command to run a Docker container and map a directory from the host (/root/database) to the container (/tmp/install/database):
# docker run -it --name oracle_install -v /root/database:/tmp/install/database bofm/oracle12c:preinstall bash
But in the container, I find I can't use ls to list the contents of /tmp/install/database/, even though I am root and have all privileges:
[root@77eb235aceac /]# cd /tmp/install/database/
[root@77eb235aceac database]# ls
ls: cannot open directory .: Permission denied
[root@77eb235aceac database]# id
uid=0(root) gid=0(root) groups=0(root)
[root@77eb235aceac database]# cd ..
[root@77eb235aceac install]# ls -alt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
I check /root/database on the host, and everything seems OK:
[root@localhost ~]# ls -lt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
Why does the Docker container report "Permission denied"?
Update:
The root cause is related to SELinux. Actually, I met similar issue last year.
A permission-denied error within a container for a shared directory could be due to the fact that the shared directory is stored on a device. By default, containers cannot access any devices. Adding the option --privileged to docker run allows the container to access all devices and perform kernel calls. This is not considered secure.
A cleaner way to share a device is to use the option docker run --device=/dev/sdb (if /dev/sdb is the device you want to share).
From the man page:
--device=[]
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
--privileged=true|false
Give extended privileges to this container. The default is false.
By default, Docker containers are “unprivileged” (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A “privileged” container is given access to all devices.
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside of a container on the host.
I had a similar issue when sharing an NFS mount point as a volume using docker-compose. I was able to resolve the issue with:
docker-compose up --force-recreate
Even though you found the issue, this may help someone else.
Another reason is a mismatch of the UID/GID. This often shows up as being able to modify a mount as root but not as the container's user.
You can set the UID, so for an Ubuntu container running as the ubuntu user you may need to append :uid=1000 (check with id -u), or set the UID locally, depending on your use case.
uid=value and gid=value
Set the owner and group of the files in the filesystem (default: uid=gid=0)
There is a good blog post about it here, with this tmpfs example:
docker run \
--rm \
--read-only \
--tmpfs=/var/run/prosody:uid=100 \
-it learning/tmpfs
http://www.dendeer.com/post/docker-tmpfs/
I got the answer from a comment under: Why does docker container prompt Permission denied?
man docker-run gives the proper answer:
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running
inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to
read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
For example:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z ...
So I was trying to run a C file using Python os.system in the container, but I was getting the same error. My fix was to add this line while creating the image: RUN chmod -R 777 app. It worked for me.
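For context, a hypothetical Dockerfile excerpt showing where such a line could sit (the app directory name comes from the answer above; note that 777 is a blunt fix that makes the files writable by everyone):
COPY . app
RUN chmod -R 777 app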
I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers with ubuntu:14.04 as the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
- symfony-code:/var/www/app
The mounted directories are always owned by root or by an unidentified user (only the user ID is visible when running ls -al) inside the container.
This essentially makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on public directories solves the problem, but as soon as I want to write to e.g. the 'cache' directory from the host (where the user is ltarasiewicz) I get a permission-denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they are again owned by root and later inaccessible to the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the Redis official image for a good example of this. (So the answer to Q3 is yes, and you should, but via a Dockerfile - don't make changes to containers by hand.)
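A minimal Dockerfile sketch of that idea (the app user/group and the command are placeholders):
FROM ubuntu:14.04
# create an unprivileged user and group for the application
RUN groupadd -r app && useradd -r -g app app
# ... install the application here ...
USER app
CMD ["your-app"]   # placeholder command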
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in point 2.
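For illustration, such an entrypoint script might look roughly like this (the path, the app user, and the use of gosu are assumptions, modeled on what official images like Redis do):
#!/bin/sh
set -e
# give the unprivileged user ownership of the mounted volume, then drop privileges
chown -R app:app /var/www/app
exec gosu app "$@"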
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).