I thought I understood the docs, but maybe I didn't. I was under the impression that the -v /HOST/PATH:/CONTAINER/PATH flag is bi-directional. If we have files or directories in the container, they would be mirrored on the host, giving us a way to retain the directories and files even after removing a Docker container.
In the official MySQL Docker images, this works: /var/lib/mysql can be bound to the host and survives restarts and replacement of the container while the data is maintained on the host.
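For example, with the official image, a run along these lines keeps the data directory on the host (the host path and password are placeholders):

docker run --name some-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v /my/own/datadir:/var/lib/mysql \
  -d mysql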
I wrote a Dockerfile for sphinxsearch-2.2.9, just as practice and for the sake of learning and understanding; here it is:
FROM debian
ENV SPHINX_VERSION=2.2.9-release
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq \
    build-essential \
    wget \
    curl \
    mysql-client \
    libmysql++-dev \
    libmysqlclient15-dev \
    checkinstall
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}.tar.gz && tar xzvf sphinx-${SPHINX_VERSION}.tar.gz && rm sphinx-${SPHINX_VERSION}.tar.gz
RUN cd sphinx-${SPHINX_VERSION} && ./configure --prefix=/usr/local/sphinx
EXPOSE 9306 9312
RUN cd sphinx-${SPHINX_VERSION} && make
RUN cd sphinx-${SPHINX_VERSION} && make install
RUN rm -rf sphinx-${SPHINX_VERSION}
VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var
Very simple and easy to get your head around while learning. I am declaring the /etc and /var directories from the sphinx build with the VOLUME instruction, thinking that it will allow me to do something like -v ~/dev/sphinx/etc:/usr/local/sphinx/etc -v ~/dev/sphinx/var:/usr/local/sphinx/var. Instead, the bind mounts are overriding the directories inside the container and leaving them blank. When I remove the -v flags and create the container, the directories contain the expected files and are not overwritten.
This is what I run to build the image after navigating to the directory the Dockerfile is in: docker build -t sphinxsearch .
And once I have that created, I do the following to create a container based on that image: docker run -it --hostname some-sphinx --name some-sphinx --volume ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc -d sphinxsearch
I would really appreciate any help and insight on how to get this to work. I looked at the MySQL images and don't see anything magical they did to make the directories bindable; they used VOLUME.
Thank you in advance.
After countless hours of research, I decided to extend my image with the following Dockerfile:
FROM sphinxsearch
VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var
RUN mkdir -p /sphinx && cd /sphinx && cp -avr /usr/local/sphinx/etc . && cp -avr /usr/local/sphinx/var .
ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
Extending it benefited me in that I didn't have to rebuild the entire image from scratch while testing, only the parts that were relevant.
I created an ENTRYPOINT to execute a bash script that would copy the files back to the required destination for sphinx to run properly, here is that code:
#!/bin/sh
set -e
target=/usr/local/sphinx/etc
# check if directory exists
if [ -d "$target" ]; then
    # check whether the directory already has files
    if find "$target" -mindepth 1 -print -quit | grep -q .; then
        # not empty: the volume already has data, leave it alone
        # (we may use this branch for something else later)
        echo "not empty, don't do anything..."
    else
        # empty: copy the files from etc and var to the right locations
        cp -avr /sphinx/etc/* /usr/local/sphinx/etc && cp -avr /sphinx/var/* /usr/local/sphinx/var
    fi
else
    # directory doesn't exist, we will have to do something here
    echo "need to create the directory..."
fi

exec "$@"
Having access to the /etc & /var directories on the host allows me to adjust the files while keeping them preserved on the host in between restarts and so forth. I also have the data saved on the host, which should survive the restarts.
I know it's a debated topic on data containers vs. storing on the host, at this moment I am leaning towards storing on the host, but will try the other method later. If anyone has any tips, advice, etc... to improve what I have or a better way, please share.
Thank you @h3nrik for the suggestions and for offering help!
Mounting container directories to the host is against Docker concepts: it would break the process/resource encapsulation principle.
The other way around, mounting a host folder into a container, is possible. But I would rather suggest using volume containers instead.
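A rough sketch of that pattern, reusing the sphinxsearch image from the question (container names are illustrative):

# a container whose only purpose is to hold the volumes
docker create -v /usr/local/sphinx/etc -v /usr/local/sphinx/var --name sphinx-data sphinxsearch /bin/true
# the actual service container reuses those volumes
docker run -d --name some-sphinx --volumes-from sphinx-data sphinxsearch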
Because MySQL initializes its data only after the volume is mapped, there is no data at /var/lib/mysql before the mapping happens.
So if you have data baked into the image before starting the container, the -v mount will override (hide) that data.
See the MySQL image's docker-entrypoint.sh.
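A simplified sketch of the kind of check that script performs (not the actual script; mysql_install_db stands in for whatever init step the image uses):

#!/bin/sh
# initialize the datadir only if the (possibly just-mounted) volume is empty
if [ ! -d /var/lib/mysql/mysql ]; then
    echo 'Initializing database'
    mysql_install_db --datadir=/var/lib/mysql
fi
exec mysqld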
The only difference between the two versions below is that the "dev" folder already exists in the centos image.
Check the comments in this piece of code (the error occurs while executing docker build); I'd appreciate it if anyone can explain why:
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script

# below fails: /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
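If you need later steps to run inside a particular directory, either chain the commands in a single RUN or use WORKDIR; a quick sketch (do-something is a placeholder):

# chain within one RUN, so the cd and the command share a shell...
RUN cd /some/directory && ./do-something
# ...or set the working directory for all subsequent instructions
WORKDIR /some/directory
RUN ./do-something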
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
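A minimal sketch of such an entrypoint, reusing the paths from the example above:

#!/bin/sh
# /dev exists (and is writable) at runtime, so this copy sticks
cp /docker/test.txt /dev/test.txt
exec "$@"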
When I pass a volume like -v /dir:/dir it works as it should,
but when I use VOLUME in my Dockerfile it gets mounted empty.
My Dockerfile looks like this
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]
Defining the volume in the Dockerfile only tells Docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir: the result is an "anonymous" volume with a guid you can see in docker volume ls. You can't specify inside the Dockerfile where to mount the volume from; by design, an image you pull from Docker Hub cannot mount an arbitrary directory from your host and send the contents of that directory to a black-hat machine on the internet.
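To actually get a host directory, you name it at run time; for example (image name and host path are illustrative):

# anonymous volume: shows up as a guid in `docker volume ls`
docker run -it stepik-image
# host bind mount: named explicitly at run time, never in the Dockerfile
docker run -it -v /home/me/stepik:/home/stepik stepik-image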
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.
I want to get some files out when the Dockerfile builds successfully, but docker build doesn't copy files from the container to the host.
That is, once the build has finished, the files should already be on the host.
This is now possible since Docker 19.03.0 in July 2019 introduced "custom build outputs". See the official docs about custom build outputs.
To enable custom build outputs from the build image into the host during the build process, you need to activate BuildKit, the newer, recommended, backwards-compatible backend for the build phase. See the official docs for enabling BuildKit.
This can be done in 2 ways:
Set the environment variable DOCKER_BUILDKIT=1, or
Set it in the docker engine by default by adding "features": { "buildkit": true } to the root of the config json, as shown below.
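For the second option, that means a daemon config (typically /etc/docker/daemon.json) along these lines, followed by a daemon restart:

{
  "features": { "buildkit": true }
}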
From the official docs about custom build outputs:
custom exporters allow you to export the build artifacts as files on the local filesystem instead of a Docker image, which can be useful for generating local binaries, code generation etc.
...
The local exporter writes the resulting build files to a directory on the client side. The tar exporter is similar but writes the files as a single tarball (.tar).
If no type is specified, the value defaults to the output directory of the local exporter.
...
The --output option exports all files from the target stage. A common pattern for exporting only specific files is to do multi-stage builds and to copy the desired files to a new scratch stage with COPY --from.
e.g. an example Dockerfile
FROM alpine:latest AS stage1
WORKDIR /app
RUN echo "hello world" > output.txt
FROM scratch AS export-stage
COPY --from=stage1 /app/output.txt .
Running
DOCKER_BUILDKIT=1 docker build --file Dockerfile --output out .
The tail of the output is:
=> [export-stage 1/1] COPY --from=stage1 /app/output.txt .    0.0s
=> exporting to client                                        0.1s
=> => copying files 45B                                       0.1s
This produces a local file out/output.txt that was created by the RUN command.
$ cat out/output.txt
hello world
All files are output from the target stage
The --output option will export all files from the target stage. So using a non-scratch stage with COPY --from will cause extraneous files to be copied to the output. The recommendation is to use a scratch stage with COPY --from.
Copying files "from the Dockerfile" to the host is not supported. The Dockerfile is just a recipe specifying how to build an image.
When you build, you have the opportunity to copy files from host to the image you are building (with the COPY directive or ADD)
You can also copy files from a container (an image that has been docker run'd) to the host with docker cp (actually, docker cp can copy from the host to the container as well).
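For example (container name and paths are illustrative):

# container -> host
docker cp some-container:/app/output.txt ./output.txt
# host -> container
docker cp ./config.conf some-container:/app/config.conf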
If you want to get back to your host some files that might have been generated during the build (for example, calling a script that generates SSL certificates), you can run a container, mounting a folder from your host, and executing cp commands.
See for example this getcrt script.
docker run -u root --entrypoint=/bin/sh --rm -i -v ${HOME}/b2d/apache:/apache apache << COMMANDS
pwd
cp crt /apache
cp key /apache
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -u)
chown -R $(id -u):$(id -u) /apache/crt
chown -R $(id -u):$(id -u) /apache/key
COMMANDS
Everything between the COMMANDS markers is executed in the container, including the cp commands, which copy into the host's ${HOME}/b2d/apache folder, mounted within the container as /apache with -v ${HOME}/b2d/apache:/apache.
That means each time you copy anything to /apache in the container, you are actually copying into ${HOME}/b2d/apache on the host!
Although it's not directly supported via the Dockerfile functionality, you can copy files from a built docker image.
containerId=$(docker create example:latest)
docker cp "$containerId":/source/path /destination/path
docker rm "$containerId"
Perhaps you could mount a folder from the host as a volume and put the files there? Usually mounted volumes could be written to from within the container, unless you specify read-only option while mounting.
docker run -v ${PWD}:/<somewritablepath> -w /<somewritablepath> <image> <action>
On completion of a Docker build using the Dockerfile, the following worked for me (where /var/lib/docker/ is the Docker Root Dir on my setup):
sudo find /var/lib/docker/ -name DESIRED_FILE -type f | xargs sudo ls -hrt | tail -1 | grep tgz | xargs -i sudo cp -v '{}' PATH_ON_HOST
You can also use the network. Starting with the script from this answer: https://stackoverflow.com/a/58255859/836450
I run this script on my host in the folder I want my output to go
Inside my Dockerfile I have:
RUN apt-get install -y curl
RUN curl -F 'file=@/path/to/file' http://172.17.0.1:44444/
Note that, for me at least, 172.17.0.1 is always the IP of the host while building the Dockerfile.
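If you'd rather look that address up than hard-code it, it is the gateway of the default bridge network; something like this on the host should print it (a sketch using docker's Go-template output):

docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'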
I am using the docker-solr image with docker, and I need to mount a directory inside it which I achieve using the -v flag.
The problem is that the container needs to write to the directory I have mounted into it, but doesn't appear to have the permissions to do so unless I do chmod 777 on the entire directory. I don't think allowing all users to read and write to it is the real solution; it's just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official Docker repositories, I've realized that the more idiomatic way to solve these permission problems is to use a tool called gosu in tandem with an entrypoint script. For example, take an existing Docker project, solr, the same one I was having trouble with earlier.
The dockerfile on Github very effectively builds the entire project, but does nothing to account for the permission problems.
So to overcome this, first I added the gosu setup to the Dockerfile (if you implement this, note that version 1.4 is hardcoded; you can check for the latest releases on the gosu releases page).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is essentially the same as su or sudo but works much more nicely with Docker. From the description of gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
So that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically the entrypoint says if I want to run solr as per usual we pass the argument start to the end of the command like this:
docker run <flags> <image-name> start
and otherwise run the commands you pass as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because unlike the dockerfile setup, which is a one time thing, the entry point runs every single time.
So now if I mount directories using the -v flag, the entrypoint will chown the files inside the Docker container for you before it actually runs solr.
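So a run along these lines (image name and host path are illustrative) works even if the mounted directory starts out owned by your own user:

docker run -v ~/solr-cores:/opt/solr/server/solr my-solr start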
As for what this does to your files outside the container, I've had mixed results because Docker acts a little weird on OS X. For me it didn't change the files outside of the container, but on another OS, where Docker plays more nicely with the filesystem, it might change your files outside; I guess that's what you'll have to deal with if you want to mount files into the container instead of just copying them in.
I have a mybase:latest image like this:
FROM ubuntu:latest
VOLUME /var
Then I encountered an error when running docker run:
docker run -it mybase:latest mkdir -p /var/test && touch /var/test/test.txt
touch: cannot touch ‘/var/test/test.txt’: No such file or directory
I noticed this question: Building Dockerfile fails when touching a file after a mkdir
But it did not solve my problem as it said:
You can only create files there while the container is running
I thought that when Docker creates the container, mkdir -p /var/test && touch /var/test/test.txt would be executed after all the volumes are ready, so it should work.
Where is my thinking wrong?
Maybe the && part isn't in the same shell as the one created for the container, but is actually run by the shell where you type the docker run command.
Try:
docker run -it mybase:latest sh -c 'mkdir -p /var/test && touch /var/test/test.txt'
That way, at least, the && part applies to the same shell as the mkdir command.