Docker tool in Jenkins container (with mounted Docker socket) is not finding a Docker daemon to connect to - linux

I just started a Jenkins docker container with a mounted docker socket like the following:
docker run -d \
--publish 8080:8080 \
--publish 50000:50000 \
--volume /my_jenkins_home:/var/jenkins_home \
--volume /var/run/docker.sock:/var/run/docker.sock \
--restart unless-stopped \
--name my_jenkins_container \
company/my_jenkins:latest
Then I bash into the container like this:
docker exec -it my_jenkins_container bash
A tool 'docker' command in a Jenkins pipeline script has automatically installed a Docker binary at the following path: /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker
However, when I try to run Docker commands from that Docker binary (assuming that it will connect with the Docker socket that has been mounted at /var/run/docker.sock) it returns the following error:
$ /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
How can I ensure that this Docker binary (the binary that has been automatically installed via the Jenkins' tool 'docker' command) runs its Docker commands by connecting to the mounted Docker socket at /var/run/docker.sock?

Short Answer:
The file permissions of the mounted Docker socket file had to be revised.
Long Answer:
When I simply tried to execute /path/to/dockerTool/bin/docker ps -a in the Docker container, it produced an error.
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Then, when I tried to execute /path/to/dockerTool/bin/docker ps -a with user=root, it worked fine.
$ docker exec -it --user=root my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 49 seconds ago Up 49 seconds 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
This means I just needed to set the right permissions on the Docker socket. All I had to do was chgrp the socket file to the jenkins group so that users in the jenkins group can read and write to it (the before and after of the chgrp command is shown here):
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root 999 0 Jan 15 08:29 /var/run/docker.sock
$ docker exec -it --user=root my_jenkins_container bash -c "chgrp jenkins /var/run/docker.sock"
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root jenkins 0 Jan 15 08:29 /var/run/docker.sock
After that, executing /path/to/dockerTool/bin/docker ps -a as a non-root user worked fine:
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
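Note that the chgrp fix lasts only as long as that particular socket file exists; if the host's Docker daemon restarts and recreates /var/run/docker.sock, the group ownership resets and the chgrp has to be reapplied. One way to avoid that (a sketch, not part of the original answer) is to grant the container the socket's group at run time with --group-add, reading the numeric group ID straight from the host's socket:
docker run -d \
--publish 8080:8080 \
--publish 50000:50000 \
--volume /my_jenkins_home:/var/jenkins_home \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add "$(stat -c '%g' /var/run/docker.sock)" \
--restart unless-stopped \
--name my_jenkins_container \
company/my_jenkins:latest
Because --group-add accepts a numeric GID, no matching group name needs to exist inside the container.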

Related

understanding docker run --attach option

I'm a newbie with Docker and I'm pretty stuck on how the --attach option works with docker run.
I more or less understand the following command: as far as I can tell, with -it Docker creates a pseudo-TTY in which the /bin/bash command is executed, and the stdin and stdout of my local terminal are linked to that pseudo-TTY.
$ docker run --rm -it ubuntu /bin/bash
root@d5e3551114ca:/# ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
What I do not understand is the meaning of the following commands:
In this case I see no output on my local terminal, but in the docker logs I can see that the keystrokes are intercepted and executed
docker run --rm --attach stdin -i ubuntu /bin/bash
Here the container is started and stopped immediately
docker run --rm --attach stdin ubuntu /bin/bash
Here the container is started, but keystrokes are not intercepted, nor is the output shown
docker run --rm --attach stdin -t ubuntu /bin/bash
Here I can see the output but keystrokes are not intercepted
$ docker run --rm --attach stdout -t ubuntu /bin/bash
root@b47a46abdf34:/# ls
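A quick experiment that may help make the difference concrete (not from the original question, just a sanity check): with -i the container's stdin stays open and receives whatever is piped in, so bash can read and execute a command from it.
$ echo "echo hello from inside" | docker run --rm -i ubuntu /bin/bash
hello from inside
Without -i, stdin is closed immediately even if it is attached, so bash sees end-of-file and exits right away, which matches the "started and stopped immediately" behaviour above.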

Running non-root Docker within Ubuntu Docker container

I'm trying to run a Docker build within a Docker container based upon Ubuntu 20.04. The container needs to run as a non-root user for the build process before the Docker build occurs.
Here's some snippets of my Dockerfile to show what I'm doing:
FROM amd64/ubuntu:20.04
# Install required packages
RUN apt-get update && apt-get install -y software-properties-common \
build-essential \
libssl-dev \
openssl \
libsqlite3-dev \
libtool \
wget \
autoconf \
automake \
git \
make \
pkg-config \
cmake \
doxygen \
graphviz \
docker.io
# Add user for CI purposes
RUN useradd -ms /bin/bash ciuser
RUN passwd -d ciuser
# Set docker group membership
RUN usermod -aG docker ciuser
# Run bash as the non-root user
CMD ["su", "-", "ciuser", "/bin/bash"]
When I run the container up, and try to run docker commands, I get an error:
$ docker run -ti --privileged=true -v /var/run/docker.sock:/var/run/docker.sock ci_container_staging
ciuser@0bb768506106:~$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied
If I remove the running as ciuser it works ok:
$ docker run -ti --privileged=true -v /var/run/docker.sock:/var/run/docker.sock ci_container_staging
root@d71654581cec:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d71654581cec ci_container_staging "/bin/bash" 3 seconds ago Up 2 seconds vigilant_lalande
root@d71654581cec:/#
Where am I going wrong with setting up Docker via Dockerfile and then setting user to run as?
amd64/ubuntu:20.04 has a docker group with group id 103. Most likely the gid of the docker group for your local machine is not 103 (check getent group docker). So even though ciuser is part of the docker group, the id is different and so the user is not granted access to the docker socket.
A simple fix would be to change the gid of the docker group in the container to match your host's:
RUN groupmod -g <HOST_DOCKER_GROUP_ID> docker
There are plenty of other ways to solve issues with mapping uid/gid to docker containers but this should give you enough information to move forward.
Example/more info:
# gid on docker socket is 998
root@c349e1d13b76:/# ls -al /var/run/docker.sock
srw-rw---- 1 root 998 0 Apr 12 14:54 /var/run/docker.sock
# But gid of docker group is 103
root@c349e1d13b76:/# getent group docker
docker:x:103:ciuser
# root can `docker ps`
root@c349e1d13b76:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c349e1d13b76 nonroot:latest "/bin/bash" About a minute ago Up About a minute kind_satoshi
# but fails for ciuser
root@c349e1d13b76:/# runuser -l ciuser -c 'docker ps'
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json: dial unix /var/run/docker.sock: connect: permission denied
# change docker gid in the container to match the one on the socket/localhost
# 998 is the docker gid on my machine, yours may (will) be different.
root@c349e1d13b76:/# groupmod -g 998 docker
# run `docker ps` again as ciuser, works.
root@c349e1d13b76:/# runuser -l ciuser -c 'docker ps'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c349e1d13b76 nonroot:latest "/bin/bash" About a minute ago Up About a minute kind_satoshi
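If you'd rather not hard-code the group ID in the image, a variant of the same idea (the ARG name here is my own choice, not from the answer above) is to pass the host's docker gid in at build time:
# In the Dockerfile
ARG DOCKER_GID=999
RUN groupmod -g ${DOCKER_GID} docker
# At build time, read the gid from the host
docker build --build-arg DOCKER_GID=$(getent group docker | cut -d: -f3) -t ci_container_staging .
This keeps the Dockerfile portable across hosts whose docker groups use different gids.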
Part of the Docker metadata when it starts a container is which user it should run as; you wouldn't generally use su or sudo.
USER ciuser
CMD ["/bin/bash"] # or the actual thing the container should do
This is important because you can override the user when the container starts up, with the docker run -u option; or you can docker run --group-add extra groups. These should typically be numeric group IDs, and they do not need to exist in the container's /etc/passwd or /etc/group files.
If the host's Docker socket is mode 0660 and owned by a docker group, you can look up the corresponding group ID and specify that the container process runs with that group ID:
docker run \
--group-add $(getent group docker | cut -d: -f3) \
-v /var/run/docker.sock:/var/run/docker.sock \
--rm \
ci_container_staging \
docker ps
(The container does not specifically need to be --privileged, though nothing stops it from launching additional privileged containers.)
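To confirm the extra group actually reached the container process, a quick check (still a sketch, using the same image name as above) is to run id instead of docker ps:
docker run --rm --group-add $(getent group docker | cut -d: -f3) ci_container_staging id
The numeric GID should appear in the groups= list even though it has no name inside the container, which is enough for the kernel's permission check on the socket file.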

How to enter a pod as root?

Currently I enter the pod as a mysql user using the command:
kubectl exec -it PODNAME -n NAMESPACE bash
I want to enter a container as root.
I've tried the following command:
kubectl exec -it PODNAME -n NAMESPACE -u root ID /bin/bash
kubectl exec -it PODNAME -n NAMESPACE -u root ID bash
There must be a way.
:-)
I found the answer.
You cannot log into the pod directly as root via kubectl.
You can do via the following steps.
1) Find out what node it is running on: kubectl get po -n [NAMESPACE] -o wide
2) SSH to that node
3) Find the Docker container: sudo docker ps | grep [namespace]
4) Log into the container as root: sudo docker exec -it -u root [DOCKER ID] /bin/bash (steps 3 and 4 are combined in the sketch after this list)
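A sketch combining steps 3 and 4 on the node ([NAMESPACE] is the same placeholder as above; the grep/awk filtering is my own shorthand):
CONTAINER_ID=$(sudo docker ps | grep [NAMESPACE] | awk '{print $1}' | head -n 1)
sudo docker exec -it -u root "$CONTAINER_ID" /bin/bash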
Actually, there is already a way to connect via the kubectl addon kubectl-plugins. I found the solution in a reply to a related question.
git clone https://github.com/jordanwilson230/kubectl-plugins.git
cd kubectl-plugins
./install-plugins.sh
source ~/.bash_profile
kubectl ssh -u root suse
Connecting...
Pod: suse
Namespace: NONE
User: root
Container: NONE
Command: /bin/sh
If you don't see a command prompt, try pressing enter.
sh-5.0#
SSH as root to a Kubernetes pod.
For those on the Windows platform using minikube:
First you need to ssh into minikube
minikube ssh --user root
Then you need to find the desired Docker container
docker ps | grep NAME_POD
Copy the fully qualified Docker container name, then use docker exec:
sudo docker exec -it -u root FQDN_CONTAINER bash
In my case it was :
sudo docker exec -it -u root k8s_jupyter_my-jupyter-0_default_f05e2913-f1fd-4084-a8e8-e783519d4a71_0 bash
After that I had full root access in bash inside the pod.

How to retain docker alpine container after "exit" is used?

For example, if I use the command docker run -it alpine /bin/sh
it starts a shell in which I can install packages and so on. When I use the exit command, it drops me back to my main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh Alpine container.
Please help.
The container lives as long as the process from the specified run command is still running. When you specify /bin/sh as the command, once you exit, the sh process will die and so will your container.
If you want to keep your container running, you have to keep the process inside running. For your case (I am not sure what you want to achieve, I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
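To see that changes actually persist across exec sessions, here is a small follow-up sequence (curl is just an example package, not from the original answer), using the container started above:
docker exec alpine apk add --no-cache curl
docker exec alpine curl --version
Anything installed this way stays in the container until it is removed with docker rm.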
Pull an image
docker image pull alpine
See that the image is there
docker image ls OR just docker images
See what is inside the alpine image
docker run alpine ls -al
Now your question is how to stay with the shell
docker container run -it alpine /bin/sh
You are now inside the shell. Some distributions may have a bash shell.
docker exec -it 5f4 sh
/ # (<-- you can run linux command here!)
At this point, you can use the Alpine command line and do
ls -al
Type exit to come out.
You can run it in detached mode and it will keep running.
With the exec command we can log in again.
docker container run -it -d alpine /bin/sh
Verify that it is up and copy the first 2-3 characters of the container ID
docker container ls
Log in with the exec command
docker exec -it <CONTAINER ID or just 2-3 digits> sh
You will need to stop it, otherwise it will keep running.
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps.
For example,
$ docker ps
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine bash
$ docker start -i -a 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Now, whenever you need to install packages inside it, you just get into the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using
docker start myalpine
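If the goal is to keep the installed packages even when the container itself is thrown away, a further option (not mentioned in the answers above; the image name is my own) is to snapshot the container as a new image with docker commit:
docker commit myalpine myalpine-with-tools:v1
docker run -it myalpine-with-tools:v1 /bin/sh
New containers started from that image already contain whatever was installed in myalpine.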

Docker container does not give me a shell

I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after the command executes, the container stops and I never get a shell inside it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is an ubuntu:trusty-based container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash (contrary to docker run -it fedora bash)
According to its Dockerfile, what it will do is execute
phoronix-test-suite /bin/bash
Meaning, it will pass /bin/bash as parameter to phoronix-test-suite, which will exit immediately. That leaves you no time to execute a docker exec -it <container> bash in order to open a bash in an active container session.
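As a workaround (my own suggestion, assuming the image is indeed based on ubuntu:trusty and ships /bin/bash), you can override the entrypoint so the container starts with a shell instead of the test suite:
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite
From that shell you can then run phoronix-test-suite with whatever arguments you need.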
Have you tried restarting your Docker daemon? It might need a restart, or you may even need to reboot the host.
