I am logged in on my PC (Fedora 24) as rperez. I have set up Docker so that it can be run by this user, and I am running a container as follows:
$ docker run -d \
-it \
-e HOST_IP=192.168.1.66 \
-e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' \
-p 80:80 \
-v ~/var/www:/var/www \
--name php55-dev reypm/php55-dev
Notice the $ sign, meaning I am running the command as a non-root user (a root shell uses #). The command above creates the directory /home/rperez/var/www, but its owner is set to root; I believe this is because Docker runs as root behind the scenes.
With this setup I am not able to create a file under ~/var/www as rperez, because the owner is root.
What is the right way to deal with this? I have read this and this, but they were not much help.
Any help?
As discussed here, this is expected behavior of Docker. You can create the target volume directory before running the docker command, or change the owner to your current user after the directory has been created by Docker:
chown -R $(whoami) /path/to/your/dir
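A minimal sketch of that approach, using the paths from the question (the docker run line is left commented, since it only makes sense on the asker's host):

```shell
# Pre-create the host directory before `docker run`, so it is owned by the
# current user rather than being created as root by the Docker daemon.
dir="$HOME/var/www"
mkdir -p "$dir"
chown -R "$(whoami)" "$dir"   # reclaims ownership if root already created it
stat -c '%U' "$dir"           # should print your username, not root
# docker run -d -v "$dir":/var/www --name php55-dev reypm/php55-dev
```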
I hit this same issue (also in a genomics context, for the very same reason) and also found it quite unintuitive. What is the recommended way to "inherit ownership"? Sorry if this is described elsewhere, but I couldn't find it. Is it something like:
docker run ... -u $(id -u):$(id -g) ...
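A sketch of what that could look like with the command from the question above (the image name is taken from the original post; untested against it):

```shell
# -u takes a numeric uid:gid pair; inspect what would be passed:
id -u
id -g
# Files the container writes to the bind mount then carry these ids on the host:
# docker run -d -u "$(id -u):$(id -g)" -v ~/var/www:/var/www \
#     --name php55-dev reypm/php55-dev
```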
Related
I am trying to use a remote server to run experiments with Docker. The problem is that I have scripts that actively modify configuration files within the Docker container in order to run experiments, which I can only do if the user/group that owns the files does it (I do not have root access on the remote server).
On my local system the files are owned by my personal user when accessed without launching the Docker container, and as soon as the container is launched the user/group changes to alice/alice, as configured in the Dockerfile. But on the remote server the files show as root/root even after launching the container. Any suggestions?
Within my build/run shell script for Docker, I have the lines:
.
.
.
CURR_UID=$(id -u)
CURR_GID=$(id -g)
RUN_OPT="-u $CURR_UID:$CURR_GID --net=host --env DISPLAY=$DISPLAY \
--volume $XAUTHORITY:/home/alice/.Xauthority \
--volume /tmp/.X11-unix:/tmp/.X11-unix \
--privileged $MOUNT_DEVEL $MOUNT_LEARN \
--shm-size $SHM_SIZE $GPU_OPT $CONT_NAME \
-it $DETACH --rm $IMAGE_NAME:latest"
docker run $RUN_OPT
.
.
.
The run option -u $CURR_UID:$CURR_GID is supposed to set the container's user/group to whichever user/group is running it at the moment. And within my Dockerfile:
.
.
.
# Working user
RUN groupadd --gid ${GROUP_ID} alice && \
useradd -m -s /bin/bash -u ${USER_ID} -g ${GROUP_ID} alice && \
echo "alice:alice" | chpasswd && adduser alice sudo
.
.
.
I can provide more information if needed; I really just need any help at all. I've been at this for days. Please advise. Thank you.
In your Dockerfile you can set the effective user with the directive:
USER alice
It is documented here: https://docs.docker.com/engine/reference/builder/#user
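A minimal Dockerfile sketch combining this with the user-creation step from the question (the base image, the default ids, and the assumption that USER_ID/GROUP_ID arrive as build args are all illustrative):

```dockerfile
FROM ubuntu:20.04
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd --gid ${GROUP_ID} alice && \
    useradd -m -s /bin/bash -u ${USER_ID} -g ${GROUP_ID} alice
# Everything from here on (RUN, CMD, ENTRYPOINT) runs as alice, not root:
USER alice
CMD ["bash"]
```

Built with, e.g., `docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) .` so the in-container user matches the host user.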
I'm building a network with Docker Compose and some bash scripts, and I'm having problems during the process. Basically, I have some containers and volumes.
In one of the containers I have to rename a file and copy it into a volume to make it accessible to other containers.
The problem is that this file is regenerated with a different name every time I start the network (because it's a key), so I don't know its name.
If I try this command on the container:
docker exec -it containerName cp /path_in_container/* /volume/key.pem
Docker gives me an error related to the path. The same thing happens if I use
docker exec -it containerName cp /path_in_container/. /volume/key.pem
If I insert the real name this way:
docker exec -it containerName cp /path_in_container/2164921649_sk /volume/key.pem
I have no problem but, as I already explained, I can't know its name.
I tried to solve the problem by copying the file from the linked volume folder directly on my system but, since the folder is protected, I need to use:
sudo chown -R user:user /tmp/path/*
In this case the problem is that if I put the chown command in a bash script, I then have to enter the password, and it doesn't always work.
So I would like either to copy the file directly from the container (by copying all the files in the folder), or to have the bash script run the chown command before the various copy operations without asking for the password.
Can someone help me?
Thanks
EDIT:
This is a part of the code useful to understand the problem
#Copy TLS-CA certificate
docker exec -it tls-ca cp /tmp/hyperledger/fabric-ca/admin/msp/cacerts/tls-ca-7051.pem /certificates/tls-ca-7051.pem
echo "Start operation for ORG0"
#ENROLL ORDERER
# for identity
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir msp
sleep 5
# for TLS
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererPW@tls-ca:7051 --enrollment.profile tls --csr.hosts orderer1-org0 --tls.certfiles /certificates/tls-ca-7051.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir tls-msp
sleep 5
#ENROLL ADMIN USER
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://admin-org0:org0adminpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/admin/ --mspdir msp
sleep 5
#CREATE NECESSARY FOLDERS
docker exec rca-org0 cp /tmp/hyperledger/fabric-ca-enrollment/orderer/tls-mps/keystore/*
chown -R fabrizio:fabrizio /tmp/hyperledger/*
mv /tmp/hyperledger/org0/orderer/tls-msp/keystore/* /tmp/hyperledger/org0/orderer/tls-msp/keystore/key.pem
mkdir -p /tmp/hyperledger/org0/orderer/msp/admincerts
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/orderer/msp/admincerts/orderer-admin-cert.pem
mkdir /tmp/hyperledger/org0/msp
mkdir /tmp/hyperledger/org0/msp/{admincerts,cacerts,tlscacerts,users}
cp /tmp/hyperledger/org0/ca/admin/msp/cacerts/rca-org0-7052.pem /tmp/hyperledger/org0/msp/cacerts/org0-ca-cert.pem
cp /tmp/hyperledger/certificates/tls-ca-7051.pem /tmp/hyperledger/org0/msp/tlscacerts/tls-ca-cert.pem
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/msp/admincerts/admin-org0-cert.pem
cp ./org0-config.yaml /tmp/hyperledger/org0/msp/config.yaml
In the script you show, you run a series of one-off commands in an existing container and then need to manage the container filesystem. It might be more straightforward to script a series of docker run commands that use docker run -v bind mounts to inject input files into the container and get the output files back out.
docker run --rm \
-v "$PWD/cacerts:/cacerts" \
-v "$PWD/certs:/certs" \
image-for-fabric-ca-client \
fabric-ca-client enroll \
-d \
-u https://orderer1-org0:ordererpw@rca-org0:7052 \
--tls.certfiles /cacerts/rca-org0-7052.pem \
--home /certs \
--mspdir msp
If this invocation has the TLS CA certificates used as input in ./cacerts, and the resulting TLS server certificates as output in ./certs, then you've "escaped" Docker space; you can use ordinary shell commands here.
mv ./certs/*_sk ./certs/key.pem
Depending on what the fabric-ca-client enroll command actually does, it might be possible to run it as the same user ID as on the host:
docker run \
-u $(id -u) \
-v "$PWD/certs:/certs" \
...
So long as the host ./cacerts directory is world-readable and the ./certs directory is writable by the current user, the main container process will run as the same (numeric) user ID as on the host, and the files will be readable without chown.
In general I'd recommend avoiding docker exec and docker cp in scripts, in much the same way you wouldn't use a debugger like gdb for routine tasks like generating CA certificates.
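If you do end up needing docker exec with a wildcard, as in the question: globs are expanded by a shell, and docker exec runs cp directly with no shell in between, which is why the literal * reaches cp and fails. A sketch, reproducing the effect locally first (the key filename is the one from the question):

```shell
# A glob like *_sk only works when a shell expands it.
workdir=$(mktemp -d)
echo secret > "$workdir/2164921649_sk"   # name not known in advance
cp "$workdir"/*_sk "$workdir/key.pem"    # the local shell expands the glob
cat "$workdir/key.pem"
# With docker exec, ask a shell *inside* the container to do the expansion:
#   docker exec containerName sh -c 'cp /path_in_container/*_sk /volume/key.pem'
```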
Also consider the possibility that you may need to run this script as root anyway. TLS private keys typically aren't readable by anyone other than their owner (mode 0600 or 0400), and you might need to chown the files to the eventual container users, which will require root access. Also note that in the last docker run invocation nothing stops you from specifying -u root or mounting a "system" host directory like -v /etc:/host-etc, so it's very easy to use docker run to root the host; on many systems, access to the Docker socket is very reasonably restricted to require sudo.
I would like to know why --group-add docker in the following doesn't work. I have another image with which it does work.
docker run \
--rm \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/docker/daemon.json:/etc/docker/daemon.json \
-v /etc/shadow:/etc/shadow \
-v /etc/passwd:/etc/passwd \
-v /etc/sudoers:/etc/sudoers \
-v /etc/group:/etc/group \
-u $(id -u):$(id -g) \
--group-add docker \
docker/compose:debian-1.27.4 \
bash
It errors out:
docker: Error response from daemon: Unable to find group docker.
I have the same issue with other images like ubuntu, hello-world, and so on. What is needed in the image to be able to add the docker group?
My system:
Ubuntu 18.04
docker 19.03.13
I know I have the docker group on the host; I can see it in the output of groups.
With your command you are trying to add the current user to the group docker, but inside the container, not on the host.
Since the group docker does not exist inside the container, you get that error message.
If that is really what you want, you can use groupadd docker to create the group, then add the user to it.
Unfortunately, Docker sets the groups (as well as the uid/gid) before it mounts volumes, so the group docker (which is defined in /etc/group on your host) is not visible at that point inside the container, since it is not in the /etc/group file contained in the image.
[Edit] In modern docker (e.g. 20.10.23) using numerical gid for docker group will do the trick. For example if docker gid is 999 use --group-add 999 instead of --group-add docker.
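A sketch of the numeric-gid variant; the commented docker run line repeats the command from the question and is untested:

```shell
# --group-add accepts a numeric gid, so no "docker" entry is needed in the
# container's /etc/group. Resolve the gid from the host's group database:
group_gid() { getent group "$1" | cut -d: -f3; }

group_gid root   # gid 0 on typical Linux hosts; use `group_gid docker` for real
# docker_gid=$(group_gid docker)
# docker run --rm -it -u "$(id -u):$(id -g)" --group-add "$docker_gid" \
#     -v /var/run/docker.sock:/var/run/docker.sock \
#     docker/compose:debian-1.27.4 bash
```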
One possible solution could be to create your own image and put an appropriate /etc/group (and possibly passwd) file inside it. It's not as convenient as manipulating only the compose file, but it would certainly do the trick.
Another alternative could be to use setpriv as the container entry point. Since the entry point is executed after the volumes are mounted, it can access the contents of the mounted /etc/group file. Similar tools like gosu or su-exec do not support auxiliary groups AFAIK, but you may find security-related information in their documentation that you may need to take into consideration, depending on your use case.
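A hypothetical sketch of such an entrypoint (the user name app and the presence of util-linux setpriv in the image are assumptions, not part of the question):

```shell
# Generate an entrypoint that, at container start (after volumes are mounted),
# resolves the docker gid from the mounted /etc/group and drops root while
# attaching that auxiliary group via setpriv.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
docker_gid=$(getent group docker | cut -d: -f3)
exec setpriv --reuid app --regid app --groups "$docker_gid" "$@"
EOF
chmod +x entrypoint.sh
sh -n entrypoint.sh   # syntax check only; running it needs root and setpriv
```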
I have a problem creating new files in a mounted Docker volume.
First, after installing Docker, I added my user to the docker group:
sudo usermod -aG docker $USER
Created a folder as my $USER:
mkdir -p /srv/redis
And started the container:
docker run -d -v /srv/redis:/data --name myredis redis
When I want to create a file in /srv/redis as the user who created the container, I have a problem with access:
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
I searched other threads but didn't find an appropriate solution.
The question title does not reflect the real problem in my opinion.
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
This problem very likely occurs because when you run:
docker run -d -v /srv/redis:/data --name myredis redis
ownership of the directory /srv/redis changes to root. You can check that with:
ls -lah /srv/redis
This is a normal consequence of mounting an external directory into Docker. To regain access you have to run:
sudo chown -R $USER /srv/redis
I think the /srv/redis/redisTest directory is created by a user inside the redis container, so it belongs to that container user.
Have you already checked with ls -l whether /srv/redis/redisTest belongs to $USER?
This could also be related (as I just found out) to having SELinux activated. This answer on the DevOps Stack Exchange worked for me:
The solution is to simply append a :z to the [docker] run volume argument so that this:
docker run -v /host/foobar:/src_dir /bin/bash
becomes this:
docker run -it -v /host/foobar:/src_dir:z /bin/bash
I am using the docker-solr image with Docker, and I need to mount a directory inside it, which I achieve using the -v flag.
The problem is that the container needs to write to the mounted directory but doesn't appear to have permission to do so unless I do chmod 777 on the entire directory. I don't think setting the permissions to allow all users to read and write is the solution; it's just a temporary workaround.
Can anyone guide me toward a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious whether there are other solutions.
More recently, after looking through some official Docker repositories, I've realized that a more idiomatic way to solve these permission problems is to use a tool called gosu in tandem with an entrypoint script. For example, take an existing Docker project, say solr, the same one I was having trouble with earlier.
The Dockerfile on GitHub very effectively builds the entire project but does nothing to account for the permission problems.
So to overcome this, I first added the gosu setup to the Dockerfile (if you implement this, notice that version 1.4 is hardcoded; you can check for the latest releases here).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the same as su or sudo but works much more nicely with Docker. From the description of gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically the entrypoint says: if I want to run solr as usual, we pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise it runs the commands you pass, as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because unlike the dockerfile setup, which is a one time thing, the entry point runs every single time.
So now if I mount directories using the -v flag, before the entrypoint actually runs solr it will chown the files inside the Docker container for you.
As for what this does to your files outside the container, I've had mixed results because Docker acts a little weird on OS X. For me it didn't change the files outside the container, but on another OS, where Docker plays more nicely with the filesystem, it might change your files outside; I guess that's what you'll have to deal with if you want to mount files inside the container instead of just copying them in.