Why can't I touch a file when the docker image has a volume? - linux

I have a mybase:latest image like this:
FROM ubuntu:latest
VOLUME /var
Then I encountered an error when docker run:
docker run -it mybase:latest mkdir -p /var/test && touch /var/test/test.txt
touch: cannot touch ‘/var/test/test.txt’: No such file or directory
I noticed this question: Building Dockerfile fails when touching a file after a mkdir
But it did not solve my problem as it said:
You can only create files there while the container is running
I think during Docker creating that container, mkdir -p /var/test && touch /var/test/test.txt is executed after all the volumes are ready, so it should work.
Where is the flaw in my reasoning?

The && part isn't interpreted by the shell created for the container - it is interpreted by the shell where you type the docker run command.
Try:
docker run -it mybase:latest sh -c 'mkdir -p /var/test && touch /var/test/test.txt'
That way, the && is interpreted by the same shell that runs the mkdir command, inside the container.
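The difference can be reproduced without Docker. In this sketch a plain sh -c child process stands in for the container; with the && unquoted, the second command runs in the outer shell, not in the child:

```shell
#!/bin/sh
cd /
# Unquoted &&: only 'cd /tmp' is handed to the child; pwd then runs
# in the OUTER shell, whose working directory never changed.
sh -c 'cd /tmp' && pwd        # prints /
# Quoted: the whole line is handed to ONE child shell, which
# evaluates the && itself - just like quoting the docker run command.
sh -c 'cd /tmp && pwd'        # prints /tmp
```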

Related

docker RUN mkdir does not work when folder exist in prev image

The only difference between the two versions is that the "dev" folder already exists in the centos image.
Check the comments in this piece of code (the error occurs while executing docker build). I would appreciate it if anyone can explain why:
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
**below works well!**
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
**/bin/sh: line 0: cd: dev/script: No such file or directory**
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
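A minimal sketch of that runtime-copy approach (the final command, my-app, is a placeholder for whatever the container actually runs):

```dockerfile
FROM centos:latest
# Stage the file in an ordinary directory; /dev is re-created for each RUN
# and again at container start, so nothing placed there at build time survives.
COPY text.txt /docker/text.txt
# Populate /dev only when the container actually starts.
CMD ["sh", "-c", "mkdir -p /dev/script && cp /docker/text.txt /dev/script/ && exec my-app"]
```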

Docker build says directory already exists but it is not

In dockerfile I am creating directory /var/log/nginx since it didn't exist in the container even though nginx.conf is set to save logs in /var/log/nginx
However, the docker build failed saying the directory /var/log/nginx already exists. But it doesn't.
Docker build error:
root@jsd-user-management:~/flask# docker build -t flask_app .
Sending build context to Docker daemon 716.8kB
Step 1/6 : FROM tiangolo/uwsgi-nginx-flask:python3.5
---> dea8fea96656
Step 2/6 : RUN mkdir /var/log/nginx
---> Running in 9e9ff86747a7
mkdir: cannot create directory ‘/var/log/nginx’: File exists
The command '/bin/sh -c mkdir /var/log/nginx' returned a non-zero code: 1
Inside Docker container:
root@jsd-user-management:~/flask# docker exec -it flask_jsd-user-management_1 bash
root@c6d43f610a51:/app/app# ls /var/log
root@c6d43f610a51:/app/app#
Surprisingly enough, when I attempt to create the directory from inside the container, it works. However, no logs are populated inside it.
You can use the -p flag:
-p, --parents
no error if existing, make parent directories as needed
RUN mkdir -p /var/log/nginx
RUN ls /var/log/
It might be the case that a parent directory is missing.
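The difference is easy to check in any shell: plain mkdir fails when the directory already exists, while mkdir -p is idempotent and also creates missing parents:

```shell
#!/bin/sh
base=$(mktemp -d)
mkdir -p "$base/a/b"         # creates the missing parent 'a' as well
mkdir -p "$base/a/b"         # second call succeeds silently
mkdir "$base/a/b" 2>/dev/null \
  || echo "plain mkdir fails on an existing directory"
```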

Docker: mount image's original /docker-entrypoint.sh to a volume in read/write mode

I try to mount image's original /docker-entrypoint.sh to a volume in read/write mode, in order to be able to change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory /app/docker-entrypoint.sh (not a file, as I wished) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request (\"OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/app/docker-entrypoint.sh\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container - the container fails to start up and keeps restarting (I assume because the /app/docker-entrypoint.sh and therefore internal /docker-entrypoint.sh are empty).
How can I mount the original content of container's /docker-entrypoint.sh to the outside?
If you want to override the docker entrypoint, the script must be executable: run chmod +x your_mount_entrypoint.sh on the host before mounting it, otherwise you will get a permission error, since an entrypoint script has to be executable.
Second, as mentioned in the comments, it is better to keep the entrypoint script in a directory, like docker-entrypoint/entrypoint.sh, and mount that.
If you want to mount a specific file instead, both names must be the same, otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint is generated inside the Dockerfile, and you can override it with any script, as long as that script is executable and is mounted to /docker-entrypoint.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it, you will see:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now, if you want to override it, create a script on the host and set execute permission:
host_path/entrypoint/entrypoint.sh
For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory over /docker-entrypoint, it will hide the content the image put there.
The workaround:
Mount some directory other than the entrypoint's; treat it as a backup location.
Add an instruction in the entrypoint to copy the entrypoint to that location at run time.
That way a new file is created in the host directory instead.
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now if you run it, you will have the docker-generated entrypoint on the host - the opposite direction of what you originally wanted:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image

Sharing Docker Process Executables with hosts [duplicate]

I thought I understood the docs, but maybe I didn't. I was under the impression that the -v /HOST/PATH:/CONTAINER/PATH flag is bi-directional. If we have file or directories in the container, they would be mirrored on the host giving us a way to retain the directories and files even after removing a docker container.
In the official MySQL docker images, this works. The /var/lib/mysql can be bound to the host and survive restarts and replacement of container while maintaining the data on the host.
I wrote a docker file for sphinxsearch-2.2.9 just as a practice and for the sake of learning and understanding, here it is:
FROM debian
ENV SPHINX_VERSION=2.2.9-release
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq\
build-essential\
wget\
curl\
mysql-client\
libmysql++-dev\
libmysqlclient15-dev\
checkinstall
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}.tar.gz && tar xzvf sphinx-${SPHINX_VERSION}.tar.gz && rm sphinx-${SPHINX_VERSION}.tar.gz
RUN cd sphinx-${SPHINX_VERSION} && ./configure --prefix=/usr/local/sphinx
EXPOSE 9306 9312
RUN cd sphinx-${SPHINX_VERSION} && make
RUN cd sphinx-${SPHINX_VERSION} && make install
RUN rm -rf sphinx-${SPHINX_VERSION}
VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var
Very simple and easy to get your head wrapped around while learning. I am assigning the /etc and /var directories from the sphinx build to the VOLUME instruction, thinking that it will allow me to do something like -v ~/dev/sphinx/etc:/usr/local/sphinx/etc -v ~/dev/sphinx/var:/usr/local/sphinx/var. But it doesn't; instead it overwrites the directories inside the container and leaves them blank. When I remove the -v flags and create the container, the directories have the expected files and they are not overwritten.
This is what I run to build the image after navigating to the directory that the Dockerfile is in: docker build -t sphinxsearch .
And once I have that created, I do the following to create a container based on that image: docker run -it --hostname some-sphinx --name some-sphinx --volume ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc -d sphinxsearch
I really would appreciate any help and insight on how to get this to work. I looked at the MySQL images and don't see anything magical that they did to make the directory bindable; they just used VOLUME.
Thank you in advance.
After countless hours of research, I decided to extend my image with the following Dockerfile:
FROM sphinxsearch
VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var
RUN mkdir -p /sphinx && cd /sphinx && cp -avr /usr/local/sphinx/etc . && cp -avr /usr/local/sphinx/var .
ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
Extending it benefited me in that I didn't have to rebuild the entire image from scratch while testing; I only rebuilt the parts that were relevant.
I created an ENTRYPOINT to execute a bash script that would copy the files back to the required destination for sphinx to run properly, here is that code:
#!/bin/sh
set -e
target=/usr/local/sphinx/etc
# check if directory exists
if [ -d "$target" ]; then
# check if we have files
if find "$target" -mindepth 1 -print -quit | grep -q .; then
# the target already has files; don't do anything
# (we may use this branch for something else later)
echo not empty, don\'t do anything...
else
# we don't have any files, let's copy the
# files from etc and var to the right locations
cp -avr /sphinx/etc/* /usr/local/sphinx/etc && cp -avr /sphinx/var/* /usr/local/sphinx/var
fi
else
# directory doesn't exist, we will have to do something here
echo need to create the directory...
fi
exec "$@"
Having access to the /etc & /var directories on the host allows me to adjust the files while keeping them preserved on the host in between restarts and so forth... I also have the data saved on the host which should survive the restarts.
I know it's a debated topic on data containers vs. storing on the host, at this moment I am leaning towards storing on the host, but will try the other method later. If anyone has any tips, advice, etc... to improve what I have or a better way, please share.
Thank you @h3nrik for the suggestions and for offering help!
Mounting container directories to the host goes against Docker's concepts; it would break the process/resource encapsulation principle.
The other way around - mounting a host folder into a container - is possible. But I would rather suggest using volume containers instead.
This is because mysql does its initialization after the mapping, so before the mapping there is no data at /var/lib/mysql.
So if the image has data there before the container starts, the -v bind mount will hide that data.
see entrypoint.sh
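The mysql image's seed-on-first-run behavior can be sketched in plain sh (the file and directory names here are made up for illustration): the entrypoint populates the target only when the mounted directory is empty, so existing host data is never clobbered.

```shell
#!/bin/sh
# Seed a (possibly bind-mounted) target dir only when it is empty,
# mimicking how the mysql entrypoint initializes /var/lib/mysql.
seed_if_empty() {
    target=$1 source=$2
    mkdir -p "$target"
    if [ -z "$(ls -A "$target")" ]; then
        cp -a "$source/." "$target/"
    fi
}
src=$(mktemp -d); tgt=$(mktemp -d)
echo defaults > "$src/my.cnf"
seed_if_empty "$tgt" "$src"     # empty target: seeded with image defaults
echo custom > "$tgt/my.cnf"
seed_if_empty "$tgt" "$src"     # non-empty target: left untouched
cat "$tgt/my.cnf"               # prints: custom
```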

Dockerfile VOLUME not working while -v works

When I pass a volume like -v /dir:/dir, it works like it should.
But when I use VOLUME in my Dockerfile, it gets mounted empty.
My Dockerfile looks like this
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]
Defining the volume in the Dockerfile only tells docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir. The result is an "anonymous" volume with a guid that you can see in docker volume ls. You can't specify inside the Dockerfile where to mount the volume from: by design, images you pull from the Docker Hub can't mount an arbitrary directory from your host and send the contents of that directory to a black-hat machine on the internet.
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.
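The distinction is visible from the CLI (the image name here is a stand-in for the one built from the Dockerfile above):

```shell
# VOLUME in the Dockerfile (or -v /home/stepik) yields an ANONYMOUS volume:
docker run -d --name anon stepik-image
docker volume ls            # shows a docker-generated 64-character hex name
# Only the person running the container can point the path at a host dir:
docker run -d --name bound -v /dir:/home/stepik stepik-image
```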