Issue docker commands on a Jenkins slave - Linux

I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers for some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine, which prevents me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and install Docker there. I've configured the VM as a Jenkins slave, using SSH to log in as the jenkins user, and I've set up everything so that this user can run docker commands without sudo. If I manually SSH into the server as the jenkins user I can run docker commands without an issue. Everything works the way you would expect.
I then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands in the Execute Shell build step I get a docker: not found error. From what I can tell, the build is running as the correct user; I added who -u to the build step so I could check which user the build runs as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
As you can see from this snippet, I can successfully run docker commands when logged into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought that would fix my issue. As you can see, I have both git and docker listed. Git commands work just fine; it is only the docker commands that give me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
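For what it's worth, being explicit about PATH inside the build step itself (a diagnostic sketch assuming the snap location shown above, not a fix) would look something like this:
#!/bin/sh -xe
# make the snap binary directory visible to the non-interactive shell Jenkins spawns
export PATH="$PATH:/snap/bin"
which docker
docker run hello-world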
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that Docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run docker in docker. Practically all questions/documentation I've found on running docker commands on a jenkins slave describe how to solve this issue by running the slave in a docker container and installing the docker client in the slave container. That is not what I'm trying to accomplish. I am trying to ssh from a jenkins master into a jenkins slave that already has docker installed and run docker commands on that server as the jenkins user.

I finally figured this out thanks to the answer to this question. After reading that answer I realized I had installed the wrong version of Docker on Ubuntu. I removed the previous installation and installed the correct Docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my Jenkins slave and everything started working.
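Roughly, the sequence looks like this (a sketch; the snap removal command is an assumption based on the /snap/bin/docker path shown above, so adjust it to however Docker was originally installed):
# remove the previous Docker installation (assumed here to be the snap package)
sudo snap remove docker
# install Docker from the official convenience script, as in the answer above
sudo curl -sSL https://get.docker.com/ | sh
# allow the jenkins user to run docker without sudo, then restart the Jenkins slave
sudo usermod -aG docker jenkins
# sanity check as the jenkins user
docker run hello-world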

Related

Failing to run the apache/spark-py Docker image locally

I'm trying to start Spark on my local Ubuntu machine using their Docker Hub image. I'm using a regular docker run command and it's failing with some missing input. It looks like it needs some extra inputs to start, but I'm not finding any documentation around that. What am I missing to start it successfully in standalone mode?
docker run -it --name myspark apache/spark-py:v3.2.
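For context, the apache/spark-py images are built for Spark on Kubernetes, so their entrypoint is geared towards driver/executor roles and typically expects to be told what to run; unrecognised commands are usually passed through. A hedged sketch, assuming the standard image layout with Spark under /opt/spark and using a placeholder tag:
# start an interactive PySpark shell (path assumes the conventional /opt/spark layout)
docker run -it --rm apache/spark-py:<tag> /opt/spark/bin/pyspark
# or just poke around inside the image
docker run -it --rm apache/spark-py:<tag> /bin/bash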

Parent Docker Containers using Docker in Docker

I am working on a Jenkins SSH agent for my builds.
I want to have Docker installed in it so it can run and build Docker images.
I currently have the following in my Dockerfile:
RUN curl -fsSL get.docker.com -o /opt/get-docker.sh
RUN chmod +x /opt/get-docker.sh
RUN sh /opt/get-docker.sh
This works fine when I run docker with
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
The issue I'm having is that when I run docker ps within the container, it shows all my parent containers as well. Is there a way to prevent this?
If you mount the host's /var/run/docker.sock, your docker client will connect to the host's Docker daemon and will therefore see everything that is running on the host.
To make your containers run Docker in a way that appears isolated from the host, you should investigate Docker-in-Docker.
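A hedged sketch of the Docker-in-Docker route, using the official docker:dind image as a separate daemon and pointing a client at it via DOCKER_HOST (the network and container names are illustrative, and TLS is disabled here only to keep the example short):
# run an isolated Docker daemon (requires --privileged)
docker network create jenkins-net
docker run -d --privileged --name dind --network jenkins-net -e DOCKER_TLS_CERTDIR="" docker:dind
# point a client container at that daemon instead of the host's socket;
# docker ps now lists only containers created inside the dind daemon
docker run --rm --network jenkins-net -e DOCKER_HOST=tcp://dind:2375 docker:cli docker ps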

Only some locally built Docker images fail to work on remote server (error: "No command specified")

I have a perplexing Docker problem. I am running Docker on my Mint laptop and on an Ubuntu VPS. I have been able to build images locally in the past, send them to the server, and have them run there. However, for clarity, the ones that work were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what version of the kernel I was using when I built things while running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the high variation in Docker version numbers. How do I have version 1.x locally, and 17.x remotely? Has the project been through a version re-numbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you mix up save/load and export/import: save/load operates on images (and keeps their metadata), while export/import operates on a tar file taken from a container's filesystem. Since you are doing a docker save to save your image, you need a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. An exported container loses all of its image metadata (including ENTRYPOINT and CMD), so the default command isn't available after importing it somewhere else.
So, run it with the entrypoint:
docker run --entrypoint sleep alpine-sleep 3
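If you switch to docker load instead, the image keeps its ENTRYPOINT and no override is needed. A sketch of the corrected round trip, based on the commands in the question (docker load accepts gzipped archives directly):
# local machine: save the image (not a container export)
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
# remote machine: load (not import), which preserves the image metadata
docker load < /path/to/images/alpine-sleep.tgz
docker run -it alpine-sleep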

How to get docker run to take the directory from the client machine to the host container?

My goal is to compile some code with Maven from my project directory on my Docker client machine, using Docker (i.e. mvn compile running in a Docker container).
Assume my maven image is called mvn-image and my project directory is project-dir.
If the Docker host were running on the same machine as the Docker client, then I could mount a volume with something similar to:
docker run -v /projects/project-dir:/workdir -i mvn-image mvn compile
But here is the tricky bit: my Docker client is on a different machine to the host machine. I'm trying to avoid building, pushing, and then running an image built from my project directory - I want the convenience of the Docker one-liner.
I also know I can run the container and then do a docker cp to get the files in there, but that still doesn't give me the one-liner I would get with a volume mount.
My question is: how can I get docker run to take the directory from the client machine to the host container?
Sending local data is not really a feature of docker run. Other commands like docker build, docker import and docker cp are able to send local data across from client to host. docker run can send stdin to the server though.
Docker build
A docker build will actually run on the remote host your client points at. The "build context" is sent across and then everything is run remotely. You can even run the maven build as part of the build:
FROM mvn-image
COPY . /workdir
RUN mvn compile
CMD ls /workdir
Then run
docker build -t my-mvn-build .
You end up with an image on the remote host with your build in it. There are no local-build/push/remote-build steps.
Docker run Stdio
docker run can do standard Unix IO, so piping something into a command running in a container works like so:
tar -cf - . | docker run -i busybox sh -c 'tar -xvf -; ls -l /'
I'm not sure exactly how that maps onto the mvn command you supplied, but with the normal Docker client it would be something like:
tar -cf - -C /projects/project-dir . | \
docker run -i mvn-image sh -c 'tar -xvf - -C /workdir; mvn compile'
Storage Plugins
Otherwise you could mount the data on the host from the client somehow. There are NFS and sshfs storage plugins.
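As a sketch of the plugin route, using the vieux/sshfs plugin from the Docker docs (the host name, path, and volume name are placeholders, and authentication options are omitted; see the plugin docs):
# one-time: install the sshfs volume plugin on the Docker host
docker plugin install vieux/sshfs
# create a volume backed by the project directory on the client machine, reached over ssh
docker volume create -d vieux/sshfs -o sshcmd=you@client-machine:/projects/project-dir projdir
# mount it like any other volume
docker run -v projdir:/workdir mvn-image mvn compile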
Rsync
Using rsync or something similar to send your project to the Docker host would be a lot more efficient over time.
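For example (the host name and target path are placeholders), something along these lines, after which the volume mount happens on the Docker host itself:
# push only the changes since last time to the Docker host
rsync -az --delete /projects/project-dir/ dockerhost:/srv/project-dir/
# then run the build with a plain bind mount on that host
ssh dockerhost docker run -v /srv/project-dir:/workdir mvn-image mvn compile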
Docker cp script
docker cp is a fairly simple solution though. If you put it in a script, you get a "one liner" to run:
#!/bin/sh
set -uex
# create (but don't start) the container that will run the build
cid=$(docker create mvn-image mvn compile)
# copy the project contents from the client into the container's /workdir
# (the trailing /. copies the directory contents rather than the directory itself)
docker cp /projects/project-dir/. "$cid":/workdir
docker start "$cid"
docker logs -f "$cid"
You'll have to use a Docker volume plugin to mount the shared/network folder:
https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-shared-storage-volume-as-a-data-volume

Running cron within a docker debian:jessie container

After installing cron with RUN apt-get update && apt-get install cron -y, I am unable to run it. If I try to run cron I get an error saying cron is not in PATH. How do I go about using cron within my container?
Note: the specific container is the official Nginx container provided by Docker.
Edit: I am running the command through Compose.
Figured it out. I was building with the Docker CLI (docker build .) but running with docker-compose. Compose runs its own build, with its own image name attached, and so was using an outdated image. docker-compose build solved it.
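In other words, when Compose manages the service, let Compose do the (re)build too. A minimal sketch:
# rebuild the image Compose actually uses, then recreate the service
docker-compose build
docker-compose up -d
# or do both in one step
docker-compose up -d --build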
