I have an Azure Container Instance whose default user is non-root. For debugging and experimentation, I'd like to exec into the container the way you would with a normal Docker container (docker exec -u root ...), so that I have root permissions inside it. As detailed in Interacting with a container in Azure Container Instances, you can run commands through az container exec ..., but as mentioned in Christian's answer, https://stackoverflow.com/a/50334426/17129046, there doesn't seem to be a way to pass extra parameters: not just arguments for the program being run, but also none of the additional options you'd have with docker exec, including the -u option to change the user the session runs as (docker exec -u root ... '/bin/bash').
I have tried using su in the container, but it prompts for a password, and I don't know what that password would be; as far as I know, the Dockerfile that created the image this ACI uses doesn't set one (the image is created via bentoml). The default user is called bentoml. Result from running id:
uid=1034(bentoml) gid=1034(bentoml) groups=1034(bentoml)
Is there a workaround for this? Maybe a way to ssh into the container as root?
I tried to reproduce the issue and got the output below.
I pulled the Docker image from Docker Hub using the command below:
docker pull <image_name>
While pulling the image, provide credentials if Docker asks for them.
I ran the image using the command below:
docker run -it <image_id> /bin/bash
Here the container is running, but I am not able to use commands that require root.
To access the container as root, use the command below:
docker run -u 0 -it <image_id> /bin/bash
Now I am able to install all packages that require root.
You can use the following in your Dockerfile to set up a password-less sudo user, so it won't ask for any password:
RUN apt-get update \
&& apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
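For reference, here is how that snippet fits into a complete build file (a minimal sketch assuming a Debian/Ubuntu base image; the base image tag and the user name docker are placeholders, not taken from the question):

```Dockerfile
# Hypothetical base image; substitute the one your image is actually built from
FROM ubuntu:20.04

# Install sudo
RUN apt-get update \
 && apt-get install -y sudo

# Create a password-less user and grant the sudo group password-less sudo
RUN adduser --disabled-password --gecos '' docker \
 && adduser docker sudo \
 && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

USER docker
```

With an image built this way, running sudo -i inside an az container exec session should drop you into a root shell without a password prompt.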
Related
I am trying to deploy the db2 express image to Docker using a non-root user.
The code below is used to start db2engine as the root user; it works fine.
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
RUN su - db2inst1 -c "db2start"
CMD ["db2start"]
The code below is used to start db2engine from the db2inst1 profile; it throws the exception below during the image build. Please help me resolve this. (I am trying to avoid the su - command.)
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
USER db2inst1
RUN /bin/bash -c ~db2inst1/sqllib/adm/db2start
CMD ["db2start"]
SQL1641N The db2start command failed because one or more DB2 database manager program files was prevented from executing with root privileges by file system mount settings.
Can you show us your Dockerfile please?
It's worth noting that a Dockerfile is used to build an image. You can execute commands while building, but once an image is published, running processes are not maintained in the image definition.
This is the reason that the CMD directive exists, so that you can tell the container which process to start and encapsulate.
If you're using the pre-existing db2 image from IBM on DockerHub (docker pull ibmcom/db2), then you will not need to start the process yourself.
Their quickstart guide demonstrates this with the following example command:
docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=<choose an instance password> -e DBNAME=testdb -v <db storage dir>:/database ibmcom/db2
As you can see, you only specify the image, and leave the default ENTRYPOINT and CMD, resulting in the DB starting.
Their recommendation for building your own container on top of theirs (FROM) is to load all custom scripts into /var/custom, and they will be executed automatically after the main process has started.
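As a concrete sketch of that recommendation (the script name create-testdb.sh is a placeholder; this assumes the /var/custom hook behaves as described above):

```Dockerfile
FROM ibmcom/db2

# Scripts placed in /var/custom are executed automatically after the main
# DB2 process has started, so there is no need to call db2start yourself
COPY create-testdb.sh /var/custom/create-testdb.sh
RUN chmod +x /var/custom/create-testdb.sh
```

The ENTRYPOINT and CMD from the base image are left untouched, so the engine still starts the way the image expects.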
I am trying to mount a volume into docker on a compute cluster running ubuntu 18.04. This volume is on a mounted filesystem to which my user has access, but sudo does not. I do have sudo permissions on this cluster. I use this command:
docker run -it --mount type=bind,source="$(pwd)"/logs,target=/workspace/logs tmp:latest bash
The result is this error:
docker: Error response from daemon: invalid mount config for type "bind": stat /home/logs: permission denied.
See 'docker run --help'.
Mounting the volume works fine on my local machine where both sudo and I have access to the drive I want to mount, which makes me believe that the problem is indeed that on the server sudo does not have permissions to the drive I want to mount into docker.
What I have tried:
running the post-install steps: sudo groupadd docker && sudo usermod -aG docker $USER
running docker with sudo
running docker with --privileged
running docker with --user $(id -u):$(id -g)
setting the user inside the dockerfile with USER $(id -u):$(id -g) (plugging in the actual values)
Is there a way to mount the volume in this setup or to change the dockerfile to correctly access the drive with my personal user? Any help would be much appreciated.
On a side note, within Docker I would only require read access to the volume, in case that changes anything.
The container is created by the Docker daemon, which runs as root. That's why it still doesn't work even if you run the container or the docker command as your own user.
You might be able to run the daemon as your own user (rootless mode).
You could also look at changing the mount options (on the host system) so that the root user on the host does have access. How to do this depends on the type of filesystem.
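One way to see what the root-owned daemon is up against is to inspect the mount options the kernel recorded for the filesystem backing the bind source. A sketch (the target path / is a placeholder; substitute the mount point behind your logs directory):

```shell
# Print the mount options for a given mount point from /proc/mounts
# (second field is the mount point, fourth is the option list)
target=/    # placeholder: use the mount point backing "$(pwd)"/logs
opts=$(awk -v t="$target" '$2 == t { print $4; exit }' /proc/mounts)
echo "options for $target: $opts"
# On FUSE/CIFS-style mounts, options such as uid=<your-uid> explain why the
# root-owned daemon gets "permission denied"; the host-side fix is remounting
# with broader options (needs root), e.g. for a FUSE mount:
#   sudo mount -o remount,allow_other "$target"
```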
My company PC runs Windows 10 and I can't bring my own laptop, but I want to develop on Linux. So I'm preparing to install Docker on Windows and run a Linux container in which I do my development.
--- background ---
I've installed Docker Desktop for Windows (19.03.8) on Win10 and pulled an Ubuntu image.
I start the Ubuntu container with -v to mount my Win10 host_dir to the container's slave_dir.
host_dir was already a git repo, with a .git directory inside it, before mounting.
Through ssh as the root user, I edit files in slave_dir in the container, and when I try to commit the changes the following error appears:
root@5f8d7d02ee70:~/slave_dir# git status
fatal: failed to read object 36fa53e7ecb9d1daa454fc82f7bd7310afa335b7: Operation not permitted
I guess something is wrong with the git permissions between Win10 and my Linux container.
Linux-container's slave_dir:
Win10's host_dir:
I also found a similar case, in which the blogger said you should run docker with --user, and the --user parameter should match the user you log in as on the host.
So I tried as follows:
docker run -it --name test --user Leo -p 127.0.0.1:5001 -v host_dir:slave_dir image_name /bin/bash
Unfortunately, slave_dir's uid and gid are still root.
With Cygwin on Win10, I used id to find my login user's uid and gid,
and retried docker run with the uid/gid directly:
docker run -it --name test --user 4344296:1049089 -p 127.0.0.1:5001 -v host_dir:slave_dir image_name /bin/bash
OMG, it still doesn't work! Still root! ...
I'm wondering whether I am doing something wrong, or whether Docker Desktop for Windows does something special with permissions when mounting.
Thanks all!
It looks like a problem with Docker 2.2.0.4. A fix to this problem can be found at this link (It worked for me).
TL;DR: Remove the read-only permission from .git folder in windows.
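On the Windows host that fix would be something like attrib -R .git\* /S /D (run from the repo directory); the Linux-side equivalent is restoring write permission on the .git tree. A self-contained sketch on a throwaway directory (the /tmp/gitdemo path is only an illustration):

```shell
# Simulate the problem on a throwaway copy: a read-only file under .git
mkdir -p /tmp/gitdemo/.git
touch /tmp/gitdemo/.git/config
chmod a-w /tmp/gitdemo/.git/config   # mimic the read-only flag set by Windows

# The fix: give the owner read/write on the whole .git tree
chmod -R u+rw /tmp/gitdemo/.git
ls -l /tmp/gitdemo/.git/config
```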
I have Ubuntu 18.04, and after installing Docker I added my user to the docker group with the command
sudo usermod -aG docker ${USER}
and logged in with
su - ${USER}
and if I check id, my user is added to the docker group.
But when I reopen the terminal I can't run docker commands without sudo unless I explicitly run su ${USER} again.
Also, I can't find the docker group with the default user.
What am I missing here?
@larsks already replied to the main question in a comment; however, I would like to elaborate on the implications of that change (adding your default user to the docker group).
Basically, the Docker daemon socket is owned by root:docker, so in order to use the Docker CLI commands, you need either to be in the docker group, or to prepend all docker commands by sudo.
As indicated in the documentation of Docker, it is risky to follow the first solution on your personal workstation, because this just amounts to providing the default user with root permissions without sudo-like password prompt protection. Indeed, users in the docker group are de facto root on the host. See for example this article and that one.
Instead, you may want to follow the second solution, which can be somewhat simplified by adding to your ~/.bashrc file an alias such as:
alias docker="sudo /usr/bin/docker"
Thus, docker run --rm -it debian will be automatically expanded to sudo /usr/bin/docker run --rm -it debian, thereby preserving sudo’s protection for your default user.
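A quick way to confirm the alias is in effect (a small sketch; note that non-interactive bash keeps aliases disabled unless expand_aliases is turned on):

```shell
#!/usr/bin/env bash
# Aliases are disabled in non-interactive shells, so enable them first
shopt -s expand_aliases
alias docker="sudo /usr/bin/docker"

# `type` shows what the word "docker" now resolves to before execution
type docker
```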
I have this image in which I mount a volume from the host
-v /Users/john/workspace:/data/workspace
Inside the container I'm using a user other than root. The problem is that it cannot create or modify files inside /data/workspace (permission denied). For now I've worked around it by running chmod -R 777 workspace on the host. What would be the Docker way to solve this?
This might be solved with user mapping (issue 7198), but that same thread includes:
Managed to solve this using the new dockerfile args. It doesn't require doing anything special after the container is built, so I thought I'd share. (Requires Docker 1.9)
In the Dockerfile:
# Setup User to match Host User, and give superuser permissions
ARG USER_ID=0
RUN useradd code_executor -u ${USER_ID} -g sudo
RUN echo 'code_executor ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${USER_ID}
Then to build:
docker build --build-arg USER_ID=$(id -u)
That way, the user in the container can write to the mounted host volume (no chown/chmod required).