Dockerfile VOLUME not working while -v works - linux

When I pass a volume like -v /dir:/dir it works like it should.
But when I use VOLUME in my Dockerfile it gets mounted empty.
My Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]

Defining the volume in the Dockerfile only tells Docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir. The result is an "anonymous" volume with a GUID that you can see in docker volume ls. You can't pass an option inside the Dockerfile to identify where to mount the volume from: by design, images you pull from Docker Hub can't mount an arbitrary directory from your host and send the contents of that directory to a black-hat machine on the internet.
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.
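A minimal sketch of the difference, using the Dockerfile above (the image tag stepik-writer is made up for the example):

# VOLUME only declares a mount point; running without -v creates an anonymous volume
docker build -t stepik-writer .
docker run -it stepik-writer
docker volume ls        # shows a volume with a generated ID mounted at /home/stepik

# To see the host directory's contents, the mapping has to be given at run time
docker run -it -v /dir:/home/stepik stepik-writer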

Related

Docker: mount image's original /docker-entrypoint.sh to a volume in read/write mode

I'm trying to mount the image's original /docker-entrypoint.sh to a volume in read/write mode, in order to be able to change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in Ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory /app/docker-entrypoint.sh (not a file, as intended) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request (\"OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/app/docker-entrypoint.sh\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container, the container fails to start up and keeps restarting (I assume because /app/docker-entrypoint.sh, and therefore the internal /docker-entrypoint.sh, are empty).
How can I mount the original content of the container's /docker-entrypoint.sh to the outside?
If you want to override the Docker entrypoint, the script must be executable. In other words, you have to chmod +x your_mount_entrypoint.sh on the host before you mount it; otherwise it will throw a permission error, since an entrypoint script has to be executable.
Second, as mentioned in the comments, it's better to keep the entrypoint script in a directory, for example docker-entrypoint/entrypoint.sh, and mount that directory.
Or, if you want to mount a specific file, both names should be the same; otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint is generated inside the Dockerfile, and you can override it with any script, as long as the script is executable and is mounted at /docker-entrypoint.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it you will get:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now, if you want to override it:
Create the script and set its permissions:
host_path/entrypoint/entrypoint.sh
For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory, it will hide the contents the Docker image has at that path.
The workaround:
Mount some directory other than the entrypoint directory (name it backup, for example)
Add an instruction in the entrypoint to copy the entrypoint script to that location at run time
That way a new file is created in the host directory instead
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now if you run it, you will have the Docker entrypoint on the host, which is the opposite direction from what you originally asked for:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image

Starting a Docker container with a user different from the one on the host

I am trying to deploy an image on an Ubuntu server. The problem is I would like the container to have a user other than root. In other words, I would like to start the container under that user.
What I have tried:
I have successfully created a user in my container image.
I tried to start the container with the docker start command, which was unsuccessful.
I tried to create a new container with a user defined inside the Dockerfile, which was also unsuccessful.
root@juju_dev_server:/home/dev# sudo docker run -it --user dev d08d53c4d78b
docker: Error response from daemon: linux spec user: unable to find user dev: no matching entries in passwd file
Here is my Dockerfile:
FROM debian
RUN groupadd -g 61000 dev
RUN useradd -g 61000 -l -m -s /bin/false -u 61000 dev
USER dev
CMD ["bash"]
FROM java:8
EXPOSE 8080
ADD /target/juju-0.0.1.jar juju-0.0.1.jar
ENTRYPOINT ["java","-jar","juju-0.0.1.jar"]
Here is how I've done it. I use Alpine, not Ubuntu, but it should work fine:
Creating and running as a user called "developer"
Dockerfile
RUN /bin/bash -c "adduser -D -u 1000 developer"
RUN passwd -d developer
RUN chown -R developer /home/developer/.bash*
RUN echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
entrypoint.sh
# stuff I need running as root here. Then below runs a bash shell as "developer"
sudo -u developer -H bash -c "$@;"
I suppose you'll want to change your ENTRYPOINT to CMD or similar, or write it into your entrypoint.sh, however you prefer to launch your Java application.
The Dockerfile you show creates two images. The first one is a plain debian image with a non-root user. The second one ignores the first one and is a somewhat routine Java image.
You need to do these two steps in the same image. If I were going to write your Dockerfile, it might look like this:
FROM java:8
EXPOSE 8080
# (Prefer COPY to ADD unless you explicitly want its
# auto-unpacking semantics.)
COPY /target/juju-0.0.1.jar juju-0.0.1.jar
# Set up a non-root user context, after COPYing content
# in. (Prevents the application from overwriting
# itself as a useful security measure.)
RUN groupadd -g 61000 app
RUN useradd -g 61000 -l -m -s /bin/false -u 61000 app
USER app
# Set the main container command. (Generally prefer
# CMD to ENTRYPOINT if you’re only using one; it makes
# both getting debugging shells and later adopting the
# pattern of an initializing entrypoint script easier.)
CMD ["java","-jar","juju-0.0.1.jar"]

Docker mounting volume. Permission denied

I have a problem with creating new files in a mounted Docker volume.
First, after installing Docker, I added my user to the docker group:
sudo usermod -aG docker $USER
Created a folder as my $USER:
mkdir -p /srv/redis
And started the container:
docker run -d -v /srv/redis:/data --name myredis redis
When I want to create a file in /srv/redis as the user who created the container, I get a permission problem:
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
I searched other threads but didn't find an appropriate solution.
The question title does not reflect the real problem in my opinion.
mkdir /srv/redis/redisTest
mkdir: cannot create directory ‘/srv/redis/redisTest’: Permission denied
This problem very likely occurs because when you run:
docker run -d -v /srv/redis:/data --name myredis redis
ownership of the directory /srv/redis changes to root. You can check that with:
ls -lah /srv/redis
This is a normal consequence of mounting an external directory into Docker. To regain access you have to run:
sudo chown -R $USER /srv/redis
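Putting the check and the fix together (a sketch using the same paths as above):

ls -lah /srv/redis                 # owner is no longer your user
sudo chown -R $USER /srv/redis     # take ownership back
mkdir /srv/redis/redisTest         # now succeeds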
I think the /srv/redis/redisTest directory is created by the user inside the redis container, so it belongs to the redis container's user.
Have you already checked, using ls -l, whether the /srv/redis/redisTest directory belongs to $USER?
This could also be related (as I just found out) to having SELinux activated. This answer on the DevOps Stack Exchange worked for me:
The solution is to simply append a :z to the [docker] run volume argument so that this:
docker run -v /host/foobar:/src_dir /bin/bash
becomes this:
docker run -it -v /host/foobar:/src_dir:z /bin/bash
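As a side note not from the quoted answer: :z applies a shared SELinux label so several containers can use the mount, while :Z applies a private label for a single container. With a hypothetical image name it would look like:

docker run -it -v /host/foobar:/src_dir:Z my_image /bin/bash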

Why are my mounted docker volume files turning into folders inside the container?

The scenario is docker inside/beside docker via a sock binding for the purpose of having an easily deployable and scalable runner agent for C.I./C.D. tools (in this particular case, VSTS). The reason for this set up is that the various projects that I want to test use docker/compose to run tests, and configuring a C.I./C.D. worker to be compatible with docker/compose a bunch of times gets cumbersome and time consuming. (This'll eventually be deployed to 4+ Kubernetes Clusters)
Anyway, the problem:
Steps to replicate
Run the vsts-agent image
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
Run another image (to emulate docker/compose running tests)
echo 'test' > file-test.txt
docker run -it -v file-test.txt:/file-test.txt busybox /bin/sh
Check for existence of test-file.txt
cd /
ls -la # shows that file-test.txt is a directory
So,
- why are files being mounted as folders inside containers?
- what do I need to do to make the volumes mount correctly?
Solution A - thanks to @BMitch
# On Host machine
docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/vsts/work/:/tmp/vsts/work \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
# In vsts-agent-with-aws-ecr
cd /tmp/vsts/work/
git clone https://NullVoxPopuli@bitbucket.org/group/project.git
cd project/
./scripts/run/eslint.sh
# Success! (this uses docker-compose to map files to the node-based docker image)
Docker creates containers and mounts volumes from the docker host. Any time a file or directory in a volume mount doesn't exist, it gets initialized as an empty directory. So if you are running docker commands from inside of a container against the docker socket, those commands get interpreted on the docker host, outside the container, where the file doesn't exist. Additionally, the docker run command requires a full path to the volume being mounted when you want a host volume; otherwise it's interpreted as a named volume.
What you likely want to do at this point is:
docker volume rm file-test.txt
docker run -it -v $(pwd)/file-test.txt:/file-test.txt busybox /bin/sh
If instead you are trying to include a file from inside the container to another container, you can initialize a named volume with input redirection like this:
tar -cC . . | docker run -i --rm -v file-test:/target busybox tar -xC /target
docker run -it -v file-test:/data busybox /bin/sh
That uses tar to copy the contents of the current directory to stdout, which is read by the interactive docker command, which then extracts those directory contents into /target inside the container; /target is a named volume. Note that I didn't mount the volume at the container's root in this second example, since named volumes are directories and I didn't want to replace the root filesystem.
Another option is to share a volume mount point between multiple containers on the docker host so that files you edit inside one container go to the host where they are mounted into the other container and visible there:
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /container-data:/container-data \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
echo 'test' > /container-data/test-file.txt
docker run -it -v /container-data:/container-data busybox /bin/sh
I don't recommend mounting individual files into a container if these files may be modified while the container is running. File changes often result in a changed inode and docker will have the old inode mounted into the container. As a result, changes either inside or outside of the container to the file may not be seen on the other side, and if you modify the file inside the container, that change may be lost when you delete the container. The solution to the inode issue is to mount the entire directory into the container.
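A short sketch of the directory-mount workaround (the directory and file names are illustrative):

mkdir -p ./config
echo 'test' > ./config/file-test.txt
docker run -it -v $(pwd)/config:/config busybox /bin/sh
# Edits to ./config/file-test.txt stay visible inside the container,
# because the bind mount tracks the directory, not a single file's inode.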

Why can't I touch a file when the docker image has a volume?

I have a mybase:latest image like this:
FROM ubuntu:latest
VOLUME /var
Then I encountered an error when running docker run:
docker run -it mybase:latest mkdir -p /var/test && touch /var/test/test.txt
touch: cannot touch ‘/var/test/test.txt’: No such file or directory
I noticed this question: Building Dockerfile fails when touching a file after a mkdir
But it did not solve my problem as it said:
You can only create files there while the container is running
I think that while Docker is creating the container, mkdir -p /var/test && touch /var/test/test.txt is executed after all the volumes are ready, so it should work.
What is wrong with my reasoning?
Maybe the && part isn't executed in the shell created for the container, but in the shell where you type the docker run command.
Try:
docker run -it mybase:latest sh -c 'mkdir -p /var/test && touch /var/test/test.txt'
That way at least, the && part applies to the shell of the mkdir command.
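In other words (an illustrative breakdown, not from the original answer), the unquoted command line is split by the host shell into two commands:

docker run -it mybase:latest mkdir -p /var/test   # runs inside the container
touch /var/test/test.txt                          # runs on the host, where /var/test doesn't exist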

Resources