I am trying to use a remote server to run experiments on Docker. The problem is that I have scripts that actively modify configuration files within the Docker container to run experiments, which I can only do if the user/group that owns the files does it (I do not have root access on the remote server).
On my local system, the files are owned by my personal user/group when accessed outside the Docker container, and as soon as the container is launched, ownership changes to alice/alice as configured in the Dockerfile. But on the remote server, ownership shows as root/root even after launching the container. Any suggestions?
Within my build/run shell script for Docker, I have the lines:
...
CURR_UID=$(id -u)
CURR_GID=$(id -g)
RUN_OPT="-u $CURR_UID:$CURR_GID --net=host --env DISPLAY=$DISPLAY \
--volume $XAUTHORITY:/home/alice/.Xauthority \
--volume /tmp/.X11-unix:/tmp/.X11-unix \
--privileged $MOUNT_DEVEL $MOUNT_LEARN \
--shm-size $SHM_SIZE $GPU_OPT $CONT_NAME \
-it $DETACH --rm $IMAGE_NAME:latest"
docker run $RUN_OPT
...
The run option -u $CURR_UID:$CURR_GID is supposed to make the container run as whatever user/group is running the script at that moment. And within my Dockerfile:
...
# Working user
RUN groupadd --gid ${GROUP_ID} alice && \
useradd -m -s /bin/bash -u ${USER_ID} -g ${GROUP_ID} alice && \
echo "alice:alice" | chpasswd && adduser alice sudo
...
I can provide more information if needed; I really just need any help at all. I've been at this for days. Please advise. Thank you.
In your Dockerfile you can set the effective user with the directive:
USER alice
It is documented here: https://docs.docker.com/engine/reference/builder/#user
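For instance, the tail of the question's own Dockerfile could be extended like this (a minimal sketch reusing the question's alice user and build args):

```dockerfile
# Create the working user, as in the question's Dockerfile ...
RUN groupadd --gid ${GROUP_ID} alice && \
    useradd -m -s /bin/bash -u ${USER_ID} -g ${GROUP_ID} alice

# ... then make alice the effective user for all following RUN/CMD/ENTRYPOINT
# instructions and for `docker run` (unless overridden with -u).
USER alice
WORKDIR /home/alice
```

Note that `docker run -u` still overrides this at runtime, which is why the image built on the remote server matters: if it was built with different USER_ID/GROUP_ID build args, the numeric ids won't line up.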
I'm building a network with Docker Compose and some bash scripts, and I'm having problems during the process. Basically I have some containers and volumes.
In one of the containers I have to rename a file and copy it into a volume to make it accessible to other containers.
The problem is that this file is regenerated with a different name every time I start the network (because it's a key), so I don't know its name.
If I try this command in the container:
docker exec -it containerName cp /path_in_container/* /volume/key.pem
Docker gives me an error related to the path. The same thing happens if I use
docker exec -it containerName cp /path_in_container/. /volume/key.pem
If I insert the real name this way:
docker exec -it containerName cp /path_in_container/2164921649_sk /volume/key.pem
I have no problem, but as I already explained, I can't know its name.
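The wildcard fails because `docker exec` runs the command directly, without a shell, and glob expansion is a shell feature; wrapping the copy in a shell inside the container (e.g. `docker exec containerName sh -c 'cp /path_in_container/* /volume/key.pem'`) is the usual workaround. The mechanism can be reproduced without Docker; the paths below are local stand-ins for the container paths:

```shell
#!/bin/sh
set -e
# Stand-ins for the container filesystem: a key file whose generated name we don't know.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/2164921649_sk"

# cp itself never expands '*'; a shell has to do it. This is the same reason
# `docker exec container cp /path/* /dest` fails: no shell is involved there.
sh -c "cp $src/* $dst/key.pem"

ls "$dst"   # the file now has the predictable name key.pem
```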
I tried to solve the problem by copying the file from the linked volume folder directly on my system but, since the folder is protected, I need to use:
sudo chown -R user:user /tmp/path/*
In this case, the problem is that if I put the chown command in a bash script, I then have to enter the password, and it doesn't always work.
So I would like to either copy the file directly out of the container by copying all the files in the folder, or have the bash script run the chown command itself, before the various copy operations, without prompting for the password.
Can someone help me?
Thanks
EDIT:
This is a part of the code useful to understand the problem
#Copy TLS-CA certificate
docker exec -it tls-ca cp /tmp/hyperledger/fabric-ca/admin/msp/cacerts/tls-ca-7051.pem /certificates/tls-ca-7051.pem
echo "Start operation for ORG0"
#ENROLL ORDERER
# for identity
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir msp
sleep 5
# for TLS
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://orderer1-org0:ordererPW@tls-ca:7051 --enrollment.profile tls --csr.hosts orderer1-org0 --tls.certfiles /certificates/tls-ca-7051.pem --home /tmp/hyperledger/fabric-ca-enrollment/orderer --mspdir tls-msp
sleep 5
#ENROLL ADMIN USER
docker exec -it rca-org0 fabric-ca-client enroll -d -u https://admin-org0:org0adminpw@rca-org0:7052 --tls.certfiles /tmp/hyperledger/fabric-ca/admin/msp/cacerts/rca-org0-7052.pem --home /tmp/hyperledger/fabric-ca-enrollment/admin/ --mspdir msp
sleep 5
#CREATE NECESSARY FOLDERS
docker exec rca-org0 cp /tmp/hyperledger/fabric-ca-enrollment/orderer/tls-mps/keystore/*
chown -R fabrizio:fabrizio /tmp/hyperledger/*
mv /tmp/hyperledger/org0/orderer/tls-msp/keystore/* /tmp/hyperledger/org0/orderer/tls-msp/keystore/key.pem
mkdir -p /tmp/hyperledger/org0/orderer/msp/admincerts
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/orderer/msp/admincerts/orderer-admin-cert.pem
mkdir /tmp/hyperledger/org0/msp
mkdir /tmp/hyperledger/org0/msp/{admincerts,cacerts,tlscacerts,users}
cp /tmp/hyperledger/org0/ca/admin/msp/cacerts/rca-org0-7052.pem /tmp/hyperledger/org0/msp/cacerts/org0-ca-cert.pem
cp /tmp/hyperledger/certificates/tls-ca-7051.pem /tmp/hyperledger/org0/msp/tlscacerts/tls-ca-cert.pem
cp /tmp/hyperledger/org0/admin/msp/signcerts/cert.pem /tmp/hyperledger/org0/msp/admincerts/admin-org0-cert.pem
cp ./org0-config.yaml /tmp/hyperledger/org0/msp/config.yaml
In the script you show, you run a series of one-off commands in an existing container and then need to manage the container filesystem. It might be more straightforward to script a series of docker run commands that use docker run -v bind mounts to inject input files into the container and get the output files back out.
docker run --rm \
-v "$PWD/cacerts:/cacerts" \
-v "$PWD/certs:/certs" \
image-for-fabric-ca-client \
fabric-ca-client enroll \
-d \
-u https://orderer1-org0:ordererpw@rca-org0:7052 \
--tls.certfiles /cacerts/rca-org0-7052.pem \
--home /certs \
--mspdir msp
If this invocation has the TLS CA certificates used as input in ./cacerts, and the resulting TLS server certificates as output in ./certs, then you've "escaped" Docker space; you can use ordinary shell commands here.
mv ./certs/*_sk ./certs/key.pem
Depending on what the fabric-ca-client enroll command actually does, it might be possible to run it as the same user ID as on the host:
docker run \
-u $(id -u) \
-v "$PWD/certs:/certs" \
...
So long as the host ./cacerts directory is world-readable and the ./certs directory is writable by the current user, the main container process will run as the same (numeric) user ID as on the host, and the files will be readable without chown.
In general I'd recommend avoiding docker exec and docker cp in scripts, in much the same way you don't use a debugger like gdb for routine tasks like generating CA certificates.
Also consider the possibility that you may need to run this script as root anyway. TLS private keys typically aren't readable by anyone other than their owner (mode 0600 or 0400), and you might need to chown the files to the eventual container users, which requires root access. Also note that in the last docker run invocation nothing stops you from specifying -u root or mounting a "system" host directory like -v /host-etc:/etc, so it is very easy to use docker run to root the host; on many systems, access to the Docker socket is therefore quite reasonably restricted to require sudo.
I'm trying to run a Docker build within a Docker container based on Ubuntu 20.04. The container needs to run as a non-root user for the build process before the Docker build occurs.
Here's some snippets of my Dockerfile to show what I'm doing:
FROM amd64/ubuntu:20.04
# Install required packages
RUN apt-get update && apt-get install -y software-properties-common \
build-essential \
libssl-dev \
openssl \
libsqlite3-dev \
libtool \
wget \
autoconf \
automake \
git \
make \
pkg-config \
cmake \
doxygen \
graphviz \
docker.io
# Add user for CI purposes
RUN useradd -ms /bin/bash ciuser
RUN passwd -d ciuser
# Set docker group membership
RUN usermod -aG docker ciuser
# Run bash as the non-root user
CMD ["su", "-", "ciuser", "/bin/bash"]
When I run the container up, and try to run docker commands, I get an error:
$ docker run -ti --privileged=true -v /var/run/docker.sock:/var/run/docker.sock ci_container_staging
ciuser#0bb768506106:~$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied
If I remove the running as ciuser it works ok:
$ docker run -ti --privileged=true -v /var/run/docker.sock:/var/run/docker.sock ci_container_staging
root#d71654581cec:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d71654581cec ci_container_staging "/bin/bash" 3 seconds ago Up 2 seconds vigilant_lalande
root#d71654581cec:/#
Where am I going wrong with setting up Docker via the Dockerfile and then setting the user to run as?
amd64/ubuntu:20.04 has a docker group with group id 103. Most likely the gid of the docker group on your local machine is not 103 (check getent group docker). So even though ciuser is part of the docker group, the id is different, and the user is not granted access to the Docker socket.
A simple fix would be to change the gid of the docker group in the container to match your host's:
RUN groupmod -g <HOST_DOCKER_GROUP_ID> docker
There are plenty of other ways to solve issues with mapping uid/gid to docker containers but this should give you enough information to move forward.
Example/more info:
# gid on docker socket is 998
root#c349e1d13b76:/# ls -al /var/run/docker.sock
srw-rw---- 1 root 998 0 Apr 12 14:54 /var/run/docker.sock
# But gid of docker group is 103
root#c349e1d13b76:/# getent group docker
docker:x:103:ciuser
# root can `docker ps`
root#c349e1d13b76:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c349e1d13b76 nonroot:latest "/bin/bash" About a minute ago Up About a minute kind_satoshi
# but fails for ciuser
root#c349e1d13b76:/# runuser -l ciuser -c 'docker ps'
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json: dial unix /var/run/docker.sock: connect: permission denied
# change docker gid in the container to match the one on the socket/localhost
# 998 is the docker gid on my machine, yours may (will) be different.
root#c349e1d13b76:/# groupmod -g 998 docker
# run `docker ps` again as ciuser, works.
root#c349e1d13b76:/# runuser -l ciuser -c 'docker ps'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c349e1d13b76 nonroot:latest "/bin/bash" About a minute ago Up About a minute kind_satoshi
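The host-side half of that fix, discovering which gid to pass to groupmod, can be scripted rather than read off by hand. A minimal sketch of the lookup (shown against the root group, which exists on every machine, since not every host has a docker group):

```shell
#!/bin/sh
# Read a group's numeric gid from the group database, exactly as you would
# for "docker" on the host before building or starting the CI container.
host_gid=$(getent group root | cut -d: -f3)
echo "$host_gid"
# You would then bake the value into the image, e.g.:
#   RUN groupmod -g <host_gid> docker
```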
Part of the Docker metadata when it starts a container is which user it should run as; you wouldn't generally use su or sudo.
USER ciuser
CMD ["/bin/bash"] # or the actual thing the container should do
This is important because you can override the user when the container starts up, with the docker run -u option; or you can docker run --group-add extra groups. These should typically be numeric group IDs, and they do not need to exist in the container's /etc/passwd or /etc/group files.
If the host's Docker socket is mode 0660 and owned by a docker group, you can look up the corresponding group ID and specify the container process has that group ID:
docker run \
--group-add $(getent group docker | cut -d: -f3) \
-v /var/run/docker.sock:/var/run/docker.sock \
--rm \
ci_container_staging \
docker ps
(The container does not specifically need to be --privileged, though nothing stops it from launching additional privileged containers.)
I know that one can use the --user option with Docker to run a container as a certain user, but in my case, my Docker image has a user inside it, let us call that user manager. Now is it possible to map that user to a user on host? For example, if there is a user john on the host, can we map john to manager?
Yes, you can set the user from the host, but you need to modify your Dockerfile a bit to handle the runtime user.
FROM alpine:latest
# Override user name at build. If build-arg is not passed, will create user named `default_user`
ARG DOCKER_USER=default_user
# Create a group and user
RUN addgroup -S $DOCKER_USER && adduser -S $DOCKER_USER -G $DOCKER_USER
# Tell docker that all future commands should run as this user
USER $DOCKER_USER
Now, build the Docker image:
docker build --build-arg DOCKER_USER=$(whoami) -t docker_user .
The new user in Docker will be the host user.
docker run --rm docker_user ash -c "whoami"
Another way is to pass host user ID and group ID without creating the user in Dockerfile.
export UID=$(id -u)
export GID=$(id -g)
docker run -it \
--user $UID:$GID \
--workdir="/home/$USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
alpine ash -c "whoami"
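The read-only /etc/passwd mount matters because name resolution (whoami, the owner column of ls -l) goes through the passwd database: a uid with no entry shows up as a bare number or an error. The lookup the container performs can be checked the same way on the host:

```shell
#!/bin/sh
set -e
# Resolve the current numeric uid back to a user name via the passwd
# database, which is what whoami does inside the container.
entry=$(getent passwd "$(id -u)")
name=${entry%%:*}   # the first colon-separated field is the user name
echo "$name"
```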
You can further read more about the user in docker here and here.
Another way is through an entrypoint.
Example
This example relies on gosu, which is present in recent Debian derivatives but not yet in Alpine 3.13 (it is in edge).
You could run this image as follow:
docker run --rm -it \
--env UID=$(id -u) \
--env GID=$(id -g) \
-v "$(pwd):$(pwd)" -w "$(pwd)" \
imagename
tree
.
├── Dockerfile
└── files/
└── entrypoint
Dockerfile
FROM ...
# [...]
ARG DOCKER_USER=default_user
RUN addgroup "$DOCKER_USER" \
&& adduser "$DOCKER_USER" -G "$DOCKER_USER"
RUN wget -O- https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 |\
install /dev/stdin /usr/local/bin/gosu
COPY files /
RUN chmod 0755 /entrypoint \
&& sed "s/\$DOCKER_USER/$DOCKER_USER/g" -i /entrypoint
ENTRYPOINT ["/entrypoint"]
files/entrypoint
#!/bin/sh
set -e
set -u
: "${UID:=0}"
: "${GID:=${UID}}"
if [ "$#" = 0 ]
then set -- "$(command -v bash 2>/dev/null || command -v sh)" -l
fi
if [ "$UID" != 0 ]
then
usermod -u "$UID" "$DOCKER_USER" 2>/dev/null && {
groupmod -g "$GID" "$DOCKER_USER" 2>/dev/null ||
usermod -a -G "$GID" "$DOCKER_USER"
}
set -- gosu "${UID}:${GID}" "$@"
fi
exec "$@"
Notes
UID is normally a read-only variable in bash, but it will work as expected if set by the docker --env flag
I chose gosu for its simplicity, but you could make it work with su or sudo; it would need more configuration, however
if you don't want to specify two --env switches, you could do something like --env user="$(id -u):$(id -g)" and, in the entrypoint, uid=${user%:*} gid=${user#*:}; note that at that point the UID variable would be read-only in bash, which is why I switched to lower case. The rest of the adaptation is left to the reader
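The single-variable variant from the last note splits cleanly with POSIX parameter expansion, no cut or awk needed. A sketch with an illustrative value:

```shell
#!/bin/sh
# Split one "uid:gid" value (as passed via --env user="$(id -u):$(id -g)")
# into its halves using POSIX parameter expansion.
user="1000:1000"
uid=${user%:*}   # remove the shortest suffix matching ':*'  -> the uid
gid=${user#*:}   # remove the shortest prefix matching '*:'  -> the gid
echo "$uid $gid"
```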
There is no simple solution that handles all use cases. Solving these problems is continuous work, a part of life in the containerized world.
There is no magical parameter you could add to a docker exec or docker run invocation that would reliably stop containerized software from running into permissions issues on host-mapped volumes. Unless your mapped directories are chmod-0777-and-come-what-may (DON'T), you will run into permissions issues and you will solve them as you go; that is the task to become efficient at, rather than hunting for a miracle once-and-forever solution that will never exist.
I have an OpenSuse 42.3 docker image that I've configured to run a code. The image has a single user(other than root) called "myuser" that I create during the initial Image generation via the Dockerfile. I have three script files that generate a container from the image based on what operating system a user is on.
Question: Can the username "myuser" in the container be set to the username of the user that executes the container generation script?
My goal is to let a user pop into the container interactively and be able to run the code from within the container. The code is just a single binary that executes and has some IO, so I want the user's directory to be accessible from within the container so that they can navigate to a folder on their machine and run the code to generate output in their filesystem.
Below is what I have constructed so far. I tried setting the USER environment variable during the Linux script's call to docker run, but that didn't change the user from "myuser" to, say, "bob" (the username on the host machine that started the container). The mounting of the directories seems to work fine. I'm not sure if it is even possible to achieve my goal.
Linux Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=${username} --workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Mac Container script:
username="$USER"
userID="$(id -u)"
groupID="$(id -g)"
home="${1:-$HOME}"
imageName="myImage:ImageTag"
containerName="version1Image"
docker run -it -d --name ${containerName} \
--workdir="/home/myuser" \
--volume="${home}:/home/myuser" ${imageName} /bin/bash
Windows Container script:
ECHO OFF
SET imageName="myImage:ImageTag"
SET containerName="version1Image"
docker run -it -d --name %containerName% --workdir="/home/myuser" -v="%USERPROFILE%:/home/myuser" %imageName% /bin/bash
echo "Container %containerName% was created."
echo "Run the ./startWindowsLociStream script to launch container"
The below code has been checked into https://github.com/bmitch3020/run-as-user.
I would handle this in an entrypoint.sh that checks the ownership of /home/myuser and updates the uid/gid of the user inside your container. It can look something like:
#!/bin/sh
set -x
# get uid/gid
USER_UID=`ls -nd /home/myuser | cut -f3 -d' '`
USER_GID=`ls -nd /home/myuser | cut -f4 -d' '`
# get the current uid/gid of myuser
CUR_UID=`getent passwd myuser | cut -f3 -d: || true`
CUR_GID=`getent group myuser | cut -f3 -d: || true`
# if they don't match, adjust
if [ ! -z "$USER_GID" -a "$USER_GID" != "$CUR_GID" ]; then
groupmod -g ${USER_GID} myuser
fi
if [ ! -z "$USER_UID" -a "$USER_UID" != "$CUR_UID" ]; then
usermod -u ${USER_UID} myuser
# fix other permissions
find / -uid ${CUR_UID} -mount -exec chown ${USER_UID}:${USER_GID} {} \;
fi
# drop access to myuser and run cmd
exec gosu myuser "$@"
And here's some lines from a relevant Dockerfile:
FROM debian:9
ARG GOSU_VERSION=1.10
# run as root, let the entrypoint drop back to myuser
USER root
# install prereq debian packages
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
vim \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gosu
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod 755 /usr/local/bin/gosu \
&& gosu nobody true
RUN useradd -d /home/myuser -m myuser
WORKDIR /home/myuser
# entrypoint is used to update uid/gid and then run the users command
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD /bin/sh
Then to run it, you just need to mount /home/myuser as a volume and it will adjust permissions in the entrypoint. e.g.:
$ docker build -t run-as-user .
$ docker run -it --rm -v $(pwd):/home/myuser run-as-user /bin/bash
Inside that container you can run id and ls -l to see that you have access to /home/myuser files.
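The uid/gid detection at the top of entrypoint.sh can be sanity-checked outside Docker against any directory owned by the current user; here a temporary directory stands in for the mounted /home/myuser:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Same parsing the entrypoint uses: numeric owner and group from `ls -nd`
# (fields 3 and 4 of the long listing).
USER_UID=$(ls -nd "$dir" | cut -f3 -d' ')
USER_GID=$(ls -nd "$dir" | cut -f4 -d' ')

echo "$USER_UID $USER_GID"
```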
Usernames are not important. What is important are the uid and gid values.
User myuser inside your container will have a uid of 1000 (first non-root user id). Thus when you start your container and look at the container process from the host machine, you will see that the container is owned by whatever user having a uid of 1000 on the host machine.
You can override this by specifying the user once you run your container using:
docker run --user 1001 ...
Therefore if you want the user inside the container, to be able to access files on the host machine owned by a user having a uid of 1005 say, just run the container using --user 1005.
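Finding the right number to pass to --user is a single stat call on the host path the container needs to access. A sketch, demonstrated on a temporary file (which is necessarily owned by the current user); the stat -c flag assumed here is GNU/busybox syntax:

```shell
#!/bin/sh
set -e
f=$(mktemp)
owner_uid=$(stat -c '%u' "$f")   # numeric uid of the file's owner
echo "$owner_uid"
# e.g.  docker run --user "$owner_uid" ...
```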
To better understand how users map between the container and the host, take a look at this wonderful article: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
First of all (https://docs.docker.com/engine/reference/builder/#arg):
Warning: It is not recommended to use build-time variables for passing
secrets like github keys, user credentials etc. Build-time variable
values are visible to any user of the image with the docker history
command.
But if you still need to do this, read https://docs.docker.com/engine/reference/builder/#arg:
A Dockerfile may include one or more ARG instructions. For example,
the following is a valid Dockerfile:
FROM busybox
ARG user1
ARG buildno
...
and https://docs.docker.com/engine/reference/builder/#user:
The USER instruction sets the user name (or UID) and optionally the
user group (or GID) to use when running the image and for any RUN, CMD
and ENTRYPOINT instructions that follow it in the Dockerfile.
USER <user>[:<group>] or
USER <UID>[:<GID>]
I am logged in in my PC (Fedora 24) as rperez. I have setup Docker for being able to run through this user, so I am running a container as follow:
$ docker run -d \
-it \
-e HOST_IP=192.168.1.66 \
-e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' \
-p 80:80 \
-v ~/var/www:/var/www \
--name php55-dev reypm/php55-dev
Notice the $ sign, meaning I am running the command as a non-root user (root's prompt shows #). The command above creates the directory /home/rperez/var/www, but the owner is set to root; I believe this is because the Docker daemon runs as root behind the scenes.
With this setup I am not able to create a file under ~/var/www as rperez, because the owner is root, so ...
What is the right way to deal with this? I have read this and this but is not so helpful.
Any help?
As discussed here, this is expected behavior for Docker. You can create the target volume directory before running the docker command, or change the owner to your current user after the directory is created by Docker:
chown $(whoami) -R /path/to/your/dir
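The "create the directory first" option works because a host directory that already exists keeps its owner; Docker only creates (as root) the path components that are missing. A quick check of the ownership you end up with, using a temporary path as a stand-in for ~/var/www:

```shell
#!/bin/sh
set -e
base=$(mktemp -d)
mkdir -p "$base/var/www"               # pre-create before `docker run -v ...`
owner=$(stat -c '%u' "$base/var/www")  # numeric owner uid (GNU/busybox stat)
echo "$owner"
```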
I hit this same issue (also in a genomics context, for the very same reason) and also found it quite unintuitive. What is the recommended way to "inherit ownership"? Sorry if this is described elsewhere, but I couldn't find it. Is it something like:
docker run ... -u $(id -u):$(id -g) ...