Docker overlay2: error walking file system: OSError [Errno 40] Too many levels of symbolic links - linux

Main app: a uvicorn server running a Starlette (Python) web app.
While trying to debug the error in the title (troubleshooting log below), I ran the command below against the host's filesystem (/var/lib/docker/overlay2/[IMAGE_HASH_FOLDER]):
find -L ./ -mindepth 15
and found the files involved in the loop.
Locally the culprit is /usr/bin/X11; on the server I'm getting the following:
error walking file system: OSError [Errno 40] Too many levels of symbolic links: '/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/fd/6/dev/stderr'
The conflicting files and their owner (paths are on the host's filesystem), pending a Docker service restart after the pruning:
➜ overlay2 find -L ./ -mindepth 15
find: File system loop detected; ‘./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11’ is part of the same file system loop as ‘./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin’.
find: File system loop detected; ‘./6ec18b03535c1dac329e05b2abdc68fb0eea742a06878d90f84c4de73ea6a4a9/merged/usr/bin/X11’ is part of the same file system loop as ‘./6ec18b03535c1dac329e05b2abdc68fb0eea742a06878d90f84c4de73ea6a4a9/merged/usr/bin’.
find: File system loop detected; ‘./l/GCDLBXTJXAL5PFTI4BE3MM3OE2/usr/bin/X11’ is part of the same file system loop as ‘./l/GCDLBXTJXAL5PFTI4BE3MM3OE2/usr/bin’.
➜ overlay2 ls -l ./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11
lrwxrwxrwx 1 root root 1 May 3 2017 ./d2dba43e7cdbdec81bac529bb85908a7f859f227cda0149389f164272cb372e8/diff/usr/bin/X11 -> .
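For context, a link like X11 -> . is all it takes to create such a loop; here's a minimal reproduction in a scratch directory (hypothetical /tmp path, not part of the original setup):
# a symlink that points back at its own directory
mkdir -p /tmp/loopdemo/bin
ln -s . /tmp/loopdemo/bin/X11
# every extra X11/ component re-enters the same directory
ls /tmp/loopdemo/bin/X11/X11/X11
# and find -L reports it as a filesystem loop, just like above
find -L /tmp/loopdemo -mindepth 5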
The Dockerfile:
FROM python:3.8
COPY src/ ./
RUN /usr/local/bin/python -m pip install --upgrade pip || true
RUN pip install -r requirements.txt || true
ARG POSTGRES_USER
ENV POSTGRES_USER=$POSTGRES_USER
ARG POSTGRES_PASSWORD
ENV POSTGRES_PASSWORD=$POSTGRES_PASSWORD
ARG POSTGRES_SERVER
ENV POSTGRES_SERVER=$POSTGRES_SERVER
ARG POSTGRES_DB
ENV POSTGRES_DB=$POSTGRES_DB
ARG POSTGRES_PORT
ENV POSTGRES_PORT=$POSTGRES_PORT
ARG SESSION_SECRET
ENV SESSION_SECRET=$SESSION_SECRET
ARG DO_YOU_WANT_USERS
ENV DO_YOU_WANT_USERS=$DO_YOU_WANT_USERS
ARG WHERE_AM_I
ENV WHERE_AM_I=$WHERE_AM_I
# SSL
ARG FORWARDED_ALLOW_IPS
ENV FORWARDED_ALLOW_IPS=$FORWARDED_ALLOW_IPS
ARG SSL_CERTIFICATE
ENV SSL_CERTIFICATE=$SSL_CERTIFICATE
ARG SSL_KEYFILE
ENV SSL_KEYFILE=$SSL_KEYFILE
ARG UPLOADS_PATH
ENV UPLOADS_PATH=$UPLOADS_PATH
RUN echo "FINAL STAGE - RUN APP"
EXPOSE 7000
CMD ["python", "run.py"]
Whether I run the container with the volume I usually bind:
UPLOADS_PATH=/var/opt/tmp
LOCAL_UPLOADS_PATH=/var/containers/TEST_UPLOADS
docker build --build-arg POSTGRES_USER --build-arg POSTGRES_PASSWORD --build-arg POSTGRES_SERVER --build-arg POSTGRES_DB --build-arg POSTGRES_PORT --build-arg UPLOADS_PATH --build-arg WHERE_AM_I --build-arg SESSION_SECRET --build-arg DO_YOU_WANT_USERS -t test .
docker run -d --name test_container -v ${LOCAL_UPLOADS_PATH}:${UPLOADS_PATH} -p 7000:7000 test
or without the binding, I get the same error logs, and the app keeps restarting after every request.
How is it possible to have such a loop (linked files?) inside the image?
UPDATE
The container was running smoothly until I replaced the bcrypt library with pybcrypt and uvicorn with its Cythonized version.
Much appreciate any suggestions on what to further explore.
P.S. I've also tried docker system prune -a, and although it removed some stale objects, nothing changed.
P.S. 2: @jordanvrtanoski I've separated the question as you suggested.
UPDATE #2
Following @jordanvrtanoski's inspect command:
➜ docker image inspect -f $'{{.RepoTags}}\t{{.GraphDriver.Data.LowerDir}}' $(docker images -q)
[test:latest] /var/lib/docker/overlay2/99e3b5db623ae543d045cc86c2d7d36400c8d1780ec4b86c297f5055bbdfe81a/diff:/var/lib/docker/overlay2/4ed6de1627ba5957c8fa9834c797a60d277c76e61f138d1b6909c55ef5475523/diff:/var/lib/docker/overlay2/7f790257bc4e6ed9e6ea6ef5bed0eb0cf3af213ea913484a40946a45639d8188/diff:/var/lib/docker/overlay2/c8e04185bdc7714e116615a3599a9832ebe2080b43f09b68331cca5d7c109371/diff:/var/lib/docker/overlay2/9ef94affd46bbcc11d62999ab0c59d6bf28cc6d51f13a7513b93bb209738940a/diff:/var/lib/docker/overlay2/62438cdccba1f312f34e8458e4ec695019e6af65107b2e16c3d7eaa53ca03c06/diff:/var/lib/docker/overlay2/9ec57b8b2680944690cdceae73c1c49b31716bd5efbed78bd3d54810bffdc7b6/diff:/var/lib/docker/overlay2/b2c4ce8d2b6764476a452489f58e615fcce939eaecb3d65466f81f5f115a5b5d/diff:/var/lib/docker/overlay2/f8609908601489fb7e3e28a32c423ee556ec041c69ba274a02de316ccbef5c48/diff:/var/lib/docker/overlay2/dcd13187b642277de35f299c1abb1d7d9695972e8b8893267a62f65338679080/diff:/var/lib/docker/overlay2/e2ed1696e3a34e69ed493da3a2c10b942f09384b1cebac54afebea6fef9c4521/diff
[python:3.8] /var/lib/docker/overlay2/c8e04185bdc7714e116615a3599a9832ebe2080b43f09b68331cca5d7c109371/diff:/var/lib/docker/overlay2/9ef94affd46bbcc11d62999ab0c59d6bf28cc6d51f13a7513b93bb209738940a/diff:/var/lib/docker/overlay2/62438cdccba1f312f34e8458e4ec695019e6af65107b2e16c3d7eaa53ca03c06/diff:/var/lib/docker/overlay2/9ec57b8b2680944690cdceae73c1c49b31716bd5efbed78bd3d54810bffdc7b6/diff:/var/lib/docker/overlay2/b2c4ce8d2b6764476a452489f58e615fcce939eaecb3d65466f81f5f115a5b5d/diff:/var/lib/docker/overlay2/f8609908601489fb7e3e28a32c423ee556ec041c69ba274a02de316ccbef5c48/diff:/var/lib/docker/overlay2/dcd13187b642277de35f299c1abb1d7d9695972e8b8893267a62f65338679080/diff:/var/lib/docker/overlay2/e2ed1696e3a34e69ed493da3a2c10b942f09384b1cebac54afebea6fef9c4521/diff
UPDATE #3
So after following both @jordanvrtanoski's advice & this post (@Janith Shanilka): Docker overlay2 eating Disk Space
I was missing the following file:
nano /etc/docker/daemon.json
and populated it with:
{
"storage-driver": "aufs"
}
then sudo systemctl restart docker
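To confirm which storage driver the daemon actually picked up after the restart, this check should work:
docker info --format '{{.Driver}}'
# prints the active storage driver, e.g. aufs or overlay2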
Now the app doesn't crash, but I'm still getting the same loop message in the logs:
error walking file system: OSError [Errno 40] Too many levels of symbolic links: '/usr/bin/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/X11/cc'
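The repeated X11 components line up with kernel behavior: Linux allows at most 40 symlink resolutions per path lookup before failing with ELOOP (which happens to be errno 40). Continuing the scratch-directory demo from above (hypothetical paths):
# 41 X11/ components exceed the 40-link limit, so the lookup fails
stat "/tmp/loopdemo/bin/$(printf 'X11/%.0s' {1..41})cc"
# stat: cannot stat '...': Too many levels of symbolic links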
also @jordanvrtanoski:
➜ docker image inspect -f $'{{.RepoTags}}\t{{.GraphDriver.Data.LowerDir}}' $(docker images -q)
[test:latest] <no value>
[python:3.8] <no value>
I've also noticed that df looks a little weird; it's as if Docker's volume is a 'clone' of the host's root filesystem:
➜ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 4046520 0 4046520 0% /dev
tmpfs 815676 3276 812400 1% /run
/dev/sda3 49014600 20123088 26798560 43% /
tmpfs 4078368 304 4078064 1% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 4078368 0 4078368 0% /sys/fs/cgroup
/dev/sda1 474730 148714 296986 34% /boot
tmpfs 815672 0 815672 0% /run/user/0
none 49014600 20123088 26798560 43% /var/lib/docker/aufs/mnt/0d98503bd3ea82e353f6776c2d813a642536ad6dd4300299a8fc22b5d6348bc8
UPDATE #4
So following @jordanvrtanoski's suggestion I switched Docker back from 'aufs' to overlay2.
The results below are from the host:
➜ cd /var/lib/docker/overlay2
➜ find -L ./ -mindepth 15
find: File system loop detected; ‘./2ecf467259235c8c4605b058bff4f80100790ee7f5010d4954d6aab1a7f28686/merged/usr/bin/X11’ is part of the same file system loop as ‘./2ecf467259235c8c4605b058bff4f80100790ee7f5010d4954d6aab1a7f28686/merged/usr/bin’.
find: File system loop detected; ‘./6f39e8e2089c99f636da9a534e2ccbe7e41202eeb2ce645efa9387dd0ef0b908/diff/usr/bin/X11’ is part of the same file system loop as ‘./6f39e8e2089c99f636da9a534e2ccbe7e41202eeb2ce645efa9387dd0ef0b908/diff/usr/bin’.
find: File system loop detected; ‘./l/5AOADDMRCAKLG2FQDDJEYC6CY2/usr/bin/X11’ is part of the same file system loop as ‘./l/5AOADDMRCAKLG2FQDDJEYC6CY2/usr/bin’.

UPDATE #5
Found the cause: uvicorn[standard] is the Cythonized variant of uvicorn. Once I removed it, all the errors were gone, so I'll take this to uvicorn's GitHub.
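For anyone wanting the same rollback, the change amounts to dropping the extra so the pure-Python build is installed; a sketch (your pins and file layout may differ):
# replace uvicorn[standard] with plain uvicorn, then reinstall
sed -i 's/uvicorn\[standard\]/uvicorn/' requirements.txt
pip install -r requirements.txt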
@jordanvrtanoski Thank you once again for your help!

This problem is caused by a self-referencing symbolic link in the python:3.8 image.
~# docker run -ti --rm python:3.8 bash
root@ef6c6f4e18ff:/# ls -l /usr/bin/X11/X11
lrwxrwxrwx 1 root root 1 May 3 2017 /usr/bin/X11/X11 -> .
To fix the circular reference caused by the python:3.8 image, you can simply delete the /usr/bin/X11/X11 symbolic link:
root@ef6c6f4e18ff:/# rm /usr/bin/X11/X11
You can add this to your build file as follows:
FROM python:3.8
COPY src/ ./
RUN rm /usr/bin/X11/X11
RUN /usr/local/bin/python -m pip install --upgrade pip || true
RUN pip install -r requirements.txt || true
ARG POSTGRES_USER
ENV POSTGRES_USER=$POSTGRES_USER
ARG POSTGRES_PASSWORD
ENV POSTGRES_PASSWORD=$POSTGRES_PASSWORD
ARG POSTGRES_SERVER
ENV POSTGRES_SERVER=$POSTGRES_SERVER
ARG POSTGRES_DB
ENV POSTGRES_DB=$POSTGRES_DB
ARG POSTGRES_PORT
ENV POSTGRES_PORT=$POSTGRES_PORT
ARG SESSION_SECRET
ENV SESSION_SECRET=$SESSION_SECRET
ARG DO_YOU_WANT_USERS
ENV DO_YOU_WANT_USERS=$DO_YOU_WANT_USERS
ARG WHERE_AM_I
ENV WHERE_AM_I=$WHERE_AM_I
# SSL
ARG FORWARDED_ALLOW_IPS
ENV FORWARDED_ALLOW_IPS=$FORWARDED_ALLOW_IPS
ARG SSL_CERTIFICATE
ENV SSL_CERTIFICATE=$SSL_CERTIFICATE
ARG SSL_KEYFILE
ENV SSL_KEYFILE=$SSL_KEYFILE
ARG UPLOADS_PATH
ENV UPLOADS_PATH=$UPLOADS_PATH
RUN echo "FINAL STAGE - RUN APP"
EXPOSE 7000
CMD ["python", "run.py"]

In case someone else stumbles across this later - it looks like uvicorn accesses all subfolders of the path you invoke it from. If you're not explicitly setting a working directory in your dockerfile/compose.yaml, this will be the file system root, which gets into all the bind mount infrastructure in proc that you probably don't care about for running an ASGI server.
WORKDIR /home in a dockerfile or working_dir: /home in a compose.yaml should generally be a fine workaround for most Docker use cases for this error, or to your app directory if you're volume mounting in code.
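Applied to the Dockerfile from the question, a minimal sketch would look like this (/app is just an example directory):
FROM python:3.8
WORKDIR /app
COPY src/ ./
# ... pip install and the ARG/ENV blocks as before ...
CMD ["python", "run.py"]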

Related

Docker compose linux volume - permission access denied

Below is the Dockerfile for Linux images. I get a file path access denied error in an Ubuntu VM:
environment:
  - ASPNETCORE_ENVIRONMENT=Production
  - ASPNETCORE_Kestrel__Certificates__Default__Password=password123
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/abc/https/localhost.pfx
volumes:
  - ./devops/https/localhost.pfx:/abc/https/localhost.pfx:ro
The path in the image and the Dockerfile was the same at the time. I attempted to run it on Ubuntu; the ubuntu user is in the docker group.
The Dockerfile content is provided for reference.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY . App/
WORKDIR /App
ENV ASPNETCORE_ENVIRONMENT="Production"
ENV ASPNETCORE_URLS="https://+:44123;"
ENV ABC_WORKDIR /App
ENV ABC_FILE_STORE /abc/source
EXPOSE 44123
RUN mkdir -p $ABC_FILE_STORE
RUN mkdir -p /abc/https
RUN chown abcuser /abc/https
RUN chown abcuser $ABC_FILE_STORE
RUN chown abcuser /App
USER abcuser
VOLUME /abc/https
VOLUME $ABC_FILE_STORE
WORKDIR $ABC_FILE_STORE
# sanity check: try to write a file
RUN echo "Hello from ABC" > hello.txt
WORKDIR $ABC_WORKDIR
ENTRYPOINT ["dotnet", "ABCService.dll"]
Should all the volume paths used by the app be declared, and their permissions changed, in the Dockerfile?
Does a Dockerfile need to create a user and then use it?
From the above, it seems Windows and Linux need separate Dockerfiles for image creation.

docker failed to compute cache key: /path not found: not found

I just started with Docker and I am creating a Docker image from my code. Here is the directory structure:
project
  /deployment
    /Dockerfile.project1
  /services
    /ui
      /project1
and here is the code in Dockerfile.project1:
FROM node:14
# arguments
ARG BUILD_COMMIT
ARG BUILD_BRANCH
ARG BUILD_TAG
# port number for app
ARG PORT=3000
ARG APP=adam_website_ui
LABEL build.tag=${BUILD_TAG}
LABEL app=${APP}
# set the env
ENV BUILD_BRANCH=${BUILD_BRANCH}
ENV BUILD_COMMIT=${BUILD_COMMIT}
WORKDIR /app
# Assigning user
USER root
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
&& groupadd -g 1001 appusr \
&& useradd -r -u 1001 -g appusr appusr \
&& mkdir /home/appusr/ \
&& chown -R appusr:appusr /home/appusr/\
&& chown -R appusr:appusr /app
# copy the relevant code
COPY ../services/ui/project1 /app/
# installing deps
RUN npm install
RUN npm run build
RUN SET PORT=${PORT} && npm start
USER appusr:appusr
but this shows:
=> ERROR [4/7] COPY ../services/ui/project1 /app/ 0.0s
------
> [4/7] COPY ../services/ui/project1 /app/:
------
failed to compute cache key: "/services/ui/project1" not found: not found
and I am building with this command from the deployment folder:
docker build -t website_ui -f Dockerfile.project1 .
What can the issue be?
If you run docker build within the directory project/deployment with build context ., then docker is not able to find the files in project/services.
Try to run docker build -t website_ui -f deployment/Dockerfile.project1 . (the last argument is the build context)
From the docs:
The docker build command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified PATH or URL.
When you build a Docker image you must specify a path to a directory which will be the build context; this is the dot at the end of your docker build command.
The content of this directory will then be copied by Docker (probably into an internal Docker directory), which is why you can't COPY paths outside the context.
You should see a message like "Uploading context of X bytes" when running docker build.
Change your COPY instruction to COPY services/ui/project1 /app/ and build your image from the project's root directory:
docker build -t website_ui -f deployment/Dockerfile.project1 .
Read more about the build context in the docker build documentation.
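Putting it together, a quick sanity check using the paths from the question:
# run from project/ so that services/ is inside the context (the final ".")
docker build -t website_ui -f deployment/Dockerfile.project1 .
# and inside Dockerfile.project1 the instruction is context-relative:
# COPY services/ui/project1 /app/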

Permission denied when writing logs in Docker

I was trying to write a Docker log file on Ubuntu 20.04 by running:
sudo docker logs CONTAINER_ID >output.log
But it returned
-bash: output.log: Permission denied
How do I solve the permission problem so I can save the logs? Is the problem inside the container or outside of it?
P.S. I had this container by docker run -d -v ~/desktop/usercode/Docker:/code -p 5000:5000 flask_app:1.0, and the Dockerfile is as below:
## Base Python Image for App
FROM python:3.9-rc-buster
# Setting up Docker environment
# Setting Work directory for RUN CMD commands
WORKDIR /code
# Export env variables.
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
###
#Copy requirements file from current directory to file in
#containers code directory we have just created.
COPY requirements.txt requirements.txt
#Run and install all required modules in container
RUN pip3 install -r requirements.txt
#Copy current directory files to containers code directory
COPY . .
#RUN app.
CMD ["flask", "run"]
And, the images are:
REPOSITORY TAG IMAGE ID CREATED SIZE
flask_app 1.0 90b2840f4d5d 29 minutes ago 895MB
python 3.9-rc-buster 50625b35cf42 9 months ago 884MB
The command you entered first creates the output.log file in the directory you are in, then writes the logs to that file. The problem should be solved if you use the following command:
docker logs CONTAINER_ID > ~/output.log
This command creates the log file in your user's home directory; for example, if your username is USER1, the file is created at /home/USER1.
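If the file must land somewhere your user can't write to, remember the redirection is performed by your shell, not by docker, so the whole pipeline needs the elevated privileges (target path is just an example):
sudo sh -c 'docker logs CONTAINER_ID > /var/log/output.log'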

How do I point a docker image to my .m2 directory for running maven in docker on a mac?

When you look at the Dockerfile for a maven build it contains the line:
VOLUME /root/.m2
Now this would be great if this is where my .m2 repository was on my mac - but it isn't - it's in
/Users/myname/.m2
Now I could do:
But then the Linux implementation in Docker wouldn't know to look there. I want to map the Linux location to the Mac location, and have that as part of my vagrant init. Kind of like:
ln /root/.m2 /Users/myname/.m2
My question is: How do I point a docker image to my .m2 directory for running maven in docker on a mac?
How do I point a docker image to my .m2 directory for running maven in docker on a mac?
You would rather point a host folder (like /Users/myname/.m2) to a container folder (not an image).
See "Mount a host directory as a data volume":
In addition to creating a volume using the -v flag you can also mount a directory from your Docker daemon’s host into a container.
$ docker run -d -P --name web -v /Users/myname/.m2:/root/.m2 training/webapp python app.py
This command mounts the host directory, /Users/myname/.m2, into the container at /root/.m2.
If the path /root/.m2 already exists inside the container’s image, the /Users/myname/.m2 mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
This is consistent with the expected behavior of the mount command.
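A quick way to see that overlay behavior for yourself (image and paths are just examples):
# the image's /root/.m2 content is hidden, not deleted, while the bind mount is active
docker run --rm -v /Users/myname/.m2:/root/.m2 maven:3.5-jdk-8 ls /root/.m2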
To share the .m2 folder in the build step you can override the localRepository value in settings.xml.
Here is the Dockerfile snippet I used to share my local .m2 repository in docker.
FROM maven:3.5-jdk-8 as BUILD
RUN echo \
"<settings xmlns='http://maven.apache.org/SETTINGS/1.0.0\' \
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' \
xsi:schemaLocation='http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd'> \
<localRepository>/root/Users/myname/.m2/repository</localRepository> \
<interactiveMode>true</interactiveMode> \
<usePluginRegistry>false</usePluginRegistry> \
<offline>false</offline> \
</settings>" \
> /usr/share/maven/conf/settings.xml;
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jre
EXPOSE 8080 5005
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
ENV _JAVA_OPTIONS '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'
ENV swarm.http.port 8080
CMD ["java", "-jar", "app-swarm.jar"]
Here are the Dockerfiles and docker-compose for an example project containing one Spring service (and any other services).
Spring-service Dockerfile:
FROM maven:3.5-jdk-8-alpine
WORKDIR /app
COPY . src
CMD cd src ; mvn spring-boot:run
docker-compose.yml
version: '3'
services:
  account-service:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - "${HOME}/.m2:/root/.m2"
Here in docker-compose we create a volume mapping between our local .m2 repo and the container's.
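With that volume in place, the dependencies Maven downloads inside the container land in the host's ~/.m2 and are reused across builds, e.g.:
docker-compose up --build account-service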

Docker in Docker cannot mount volume

I am running a Jenkins cluster where both the Master and Slave run as Docker containers.
The host is the latest boot2docker VM running on macOS.
To allow Jenkins to perform deployments using Docker, I have mounted docker.sock and the docker client from the host into the Jenkins container like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ -p 8080:8080 jenkins
I am facing issues when mounting a volume into Docker containers that are run inside the Jenkins container. For example, if I need to run another container inside the Jenkins container, I do the following:
sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE
The above runs the container, but the file "deploy.json" is NOT mounted as a file; instead it shows up as a directory. Even if I mount a directory as a volume, I am unable to view the files in the resulting container.
Is this a problem, because of file permissions due to Docker in Docker case?
A Docker container in a Docker container uses the parent HOST's Docker daemon; hence, any volume mounted in the "docker-in-docker" case is still referenced from the HOST, not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, an empty new directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container inside a container.
A very basic and obvious thing, which I missed but realized as soon as I typed the question.
Lots of good info in these posts, but I find none of them are very clear about which container they are referring to. So let's label the 3 environments:
host: H
docker container running on H: D
docker container running in D: D2
We all know how to mount a folder from H into D: start D with
docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...
The challenge is: you want path-on-H to be available in D2 as path-on-D2.
But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with
docker run ... -v <path-on-D>:<path-on-D2> ...
When you share the docker socket on H with D, then running docker commands in D is essentially running them on H. Indeed if you start D2 like this, all works (quite unexpectedly at first, but makes sense when you think about it):
docker run ... -v <path-on-H>:<path-on-D2> ...
The next tricky bit is that for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such a variable clearer, I start its name with DIND_. Then from H start D like this:
docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
--env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...
and from D start D2 like this:
docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
Another way to go about this is to use either named volumes or data volume containers. This way, the container inside doesn't have to know anything about the host, and both the Jenkins container and the build container reference the data volume the same way.
I have tried doing something similar to what you are doing, except with an agent rather than using the Jenkins master. The problem was the same in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was the data volume container approach, and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is that both containers reference the data volume in the same way. Mounting directories with an inner container would be tricky, as the inner container now needs to know something about the host its parent container is running on.
I have a detailed blog post about my approach here:
http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/
As well as code here:
https://github.com/damnhandy/jenkins-pipeline-docker
In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.
Regarding your use case related to Jenkins, you can simply fake the path by creating a symlink on the host:
ln -s $HOST_JENKINS_DATA_DIRECTORY/jenkins_data /var/jenkins_home
If you are like me and don't want to mess with the Jenkins setup, or are too lazy to go through all this trouble, here is a simple workaround I used to get this working.
Step 1 - Add the following variables to the environment section of the pipeline:
environment {
    ABSOLUTE_WORKSPACE = "/home/ubuntu/volumes/jenkins-data/workspace"
    JOB_WORKSPACE = "\${PWD##*/}"
}
Step 2 - Run your container in the Jenkins pipeline as follows:
steps {
    sh "docker run -v ${ABSOLUTE_WORKSPACE}/${JOB_WORKSPACE}/my/dir/to/mount:/targetPath imageName:tag"
}
Take note of the double quotes in the above statement; Jenkins will not expand the env variables if the quotes are not formatted properly or if single quotes are used instead.
What does each variable signify?
ABSOLUTE_WORKSPACE is the path of the Jenkins volume we mounted while starting the Jenkins Docker container. In my case, the docker run command was as follows:
sudo docker run \
-p 80:8080 \
-v /home/ubuntu/volumes/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-d -t jenkinsci/blueocean
Thus the variable ABSOLUTE_WORKSPACE = /home/ubuntu/volumes/jenkins-data + /workspace.
The JOB_WORKSPACE expression gives us the current workspace directory, where your code lives. This is also the root dir of your code base. (I followed this answer for reference.)
How does this work?
It is very straightforward. As mentioned in @ZephyrPLUSPLUS's answer (credit where due), the source path for the docker container being run in a Jenkins pipeline is not a path in the current container; the path used is the host's. All we are doing here is constructing the path where our Jenkins pipeline is being run, and mounting it into our container. Voila!!
This also works via docker-compose and/or named volumes so you don't need to create a data only container, but you still need to have the empty directory on the host.
Host setup
Make host side directories and set permissions to allow Docker containers to access
sudo mkdir -p /var/jenkins_home/{workspace,builds,jobs} && sudo chown -R 1000 /var/jenkins_home && sudo chmod -R a+rwx /var/jenkins_home
docker-compose.yml
version: '3.1'
services:
  jenkins:
    build: .
    image: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:/var/jenkins_home/workspace/
      # Can also do builds/jobs/etc here and below
  jenkins-lts:
    build:
      context: .
      args:
        versiontag: lts
    image: jenkins:lts
    ports:
      - 8081:8080
      - 50001:50000
volumes:
  workspace:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/jenkins_home/workspace/
Once you docker-compose up --build jenkins (you may want to incorporate this into a ready-to-run example like https://github.com/thbkrkr/jks, where the .groovy scripts pre-configure Jenkins to be useful on startup), you will be able to have your jobs clone into the $JENKINS_HOME/workspace directory without errors about missing files etc., because the host and container paths will match; running further containers from within the Docker-in-Docker should then work as well.
Dockerfile (for Jenkins with Docker in Docker)
ARG versiontag=latest
FROM jenkins/jenkins:${versiontag}
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
COPY jenkins_config/config.xml /usr/share/jenkins/ref/config.xml.override
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root
RUN curl -L http://get.docker.io | bash && \
usermod -aG docker jenkins
# Since the above takes a while make any other root changes below this line
# eg `RUN apt update && apt install -y curl`
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8080
A way to work around this issue is to mount a directory (inside your docker container in which you mounted your docker socket) using the exact same path for its destination. Then, when you run a container from within that container, you are able to mount anything within that mount's path into the new container using docker -v.
Take this example:
# Spin up your container from which you will use docker
docker run -v /some/dir:/some/dir -v /var/run/docker.sock:/var/run/docker.sock docker:latest
# Now spin up a container from within this container
docker run -v /some/dir:/usr/src/app $CONTAINER_IMAGE
The folder /some/dir is now mounted across your host, the intermediate container, and your destination container. Since the mount's path exists on both the host and the "nearly docker-in-docker" container, you can use docker -v as expected.
It's somewhat similar to the suggestion of creating a symlink on the host, but I found this (at least in my case) a cleaner solution. Just don't forget to clean up the dir on the host afterwards! ;)
I had the same problem in GitLab CI. I solved it by using docker cp to do something like a mount:
script:
  - docker run --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
after_script:
  - docker cp ${CONTAINER_NAME}:/code/newman ./
  - docker rm ${CONTAINER_NAME}
Based on the description from @ZephyrPLUSPLUS, here is how I managed to solve this:
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ ls -l /home/vagrant/dir-new/
total 4
-rw-rw-r-- 1 vagrant vagrant 10 Jun 19 11:24 file-new
vagrant@vagrant:~$ cat /home/vagrant/dir-new/file-new
something
vagrant@vagrant:~$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # hostname
3947b1f93e61
/ # ls -l /home/vagrant/dir-new/
ls: /home/vagrant/dir-new/: No such file or directory
/ # docker run -it --rm -v /home/vagrant/dir-new:/magic ubuntu /bin/bash
root@3644bfdac636:/# ls -l /magic
total 4
-rw-rw-r-- 1 1000 1000 10 Jun 19 11:24 file-new
root@3644bfdac636:/# cat /magic/file-new
something
root@3644bfdac636:/# exit
/ # hostname
3947b1f93e61
/ #
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$
So docker is installed in a Vagrant machine; let's call it vagrant. The directory you want to mount is /home/vagrant/dir-new in vagrant.
It starts a container with hostname 3947b1f93e61. Notice that /home/vagrant/dir-new/ is not mounted in 3947b1f93e61.
Next we use the exact location from vagrant, /home/vagrant/dir-new, as the source of the mount, and specify any mount target we want, in this case /magic. Also note that /home/vagrant/dir-new does not exist in 3947b1f93e61.
This starts another container, 3644bfdac636.
Now the contents from /home/vagrant/dir-new in vagrant can be accessed from 3644bfdac636.
I think this is because docker-in-docker is not a child but a sibling, and the path you specify must be the parent's path, not the sibling's. So any mount still refers to the path on vagrant, no matter how deeply you nest docker-in-docker.
You can solve this by passing in an environment variable.
Example:
.
├── docker-compose.yml
└── my-volume-dir
    └── test.txt
In docker-compose.yml:
version: "3.3"
services:
test:
image: "ubuntu:20.04"
volumes:
- ${REPO_ROOT-.}/my-volume-dir:/my-volume
entrypoint: ls /my-volume
To test, run:
docker run -e REPO_ROOT=${PWD} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ${PWD}:/my-repo \
-w /my-repo \
docker/compose \
docker-compose up test
You should see in the output:
test_1 | test.txt
