Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
It works properly and stores some logs in /var/lib/myapp of the Docker container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the Docker container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time, which means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host to the container, not vice versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the shadowed container directory, but because of the nature of containers, this effectively means it will be overridden for the lifetime of the container.
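You can see the same shadowing behavior with a plain mount --bind outside Docker; here is a quick sketch (not part of the original answer, and the /tmp paths are arbitrary examples):
mkdir -p /tmp/src /tmp/dst
touch /tmp/dst/existing-file
sudo mount --bind /tmp/src /tmp/dst
ls /tmp/dst    # empty: existing-file is shadowed, not deleted
sudo umount /tmp/dst
ls /tmp/dst    # existing-file is back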
So you're left with two options (maybe three). You could mount a host directory into your container and then copy those files to it in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host-directory volume mount), as sketched below.
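A rough sketch of that first option (the /backup mount point, the /var/lib/myapp-backup host path, and the cp command are illustrative assumptions, not part of the original image):
sudo docker run -d --name myapp -p 8080:8080 \
    -v /var/lib/myapp-backup:/backup \
    myapp:latest
# ...and have starter.sh (or a cron job inside the container) run something like:
# cp -r /var/lib/myapp/. /backup/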
You could also use docker cp to move files from your container to your host. That is kind of hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly this purpose: one-off copies or debugging.
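For example (myapp is the container name from the question; the destination directory is an arbitrary choice for this sketch):
# Copy the log directory out of the running container onto the host
sudo docker cp myapp:/var/lib/myapp /var/lib/myapp-backup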
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your logs are in the /var/lib/docker/volumes/... path associated with the volume from your container.
Or you can redirect the result of docker logs <containername> to a host file.
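For example (myapp being the container name from the question; the host file path is an arbitrary choice):
# Show where Docker stored the container's volumes on the host
docker inspect -f '{{ json .Mounts }}' myapp
# Or capture the container's stdout/stderr into a host file
docker logs myapp > /tmp/myapp.log 2>&1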
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
I am using the image academind/node-example-1, which is a simple Node image. You can check it here: https://hub.docker.com/r/academind/node-example-1. What I want is to get, while running the image, the same folder & file structure that exists in the image. I know that can be done via a volume. When I use:
docker run -d --rm --name node-test -p 5000:80 academind/node-example-1
everything works properly. But I want to get the codebase while the container is running, so I tried:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:node-example-1 academind/node-example-1
Here node-docker is my local folder where I expect the code to be. It runs, but I don't get the files on my local machine. I'm in doubt about the source_path:destination_path part. Please tell me where I'm wrong, what to do, or whether my entire thinking is going in the wrong direction.
Thanks.
If you read the official doc, you'll see that the first part of the : should be a path somewhere on the host machine (which you're doing), while the latter part should be the path inside the container (instead, you're using the image name). Assuming /app is the path (I've taken that course myself and this is the path AFAIR), it should be:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:/app academind/node-example-1
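To confirm the bind mount took effect, you could list the mounted path inside the running container (a quick check, not part of the original answer):
docker exec node-test ls /app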
I think the correct syntax is to enter the drive information in the volume mapping.
E.g. /Users/xxxx/:/some/drive/location
However, that would map your (empty) host directory at xxxx over the top of the 'location' folder, hiding the existing files in the container for the lifetime of the mount.
If you are interested in seeing the contents of the files in the container, consider using the docker cp command.
People often use volume mounts to push data (e.g. persistent database files) into a container.
Alternatively, the application can write its log files to the volume-mounted location inside the container; those files are then reflected on your local drive.
You can copy the files to the current host directory while the container is running, using:
docker cp node-test:/app .
With the below command:
docker container run -dit --name testcontainer --mount source=ubervol,target=/vol alpine:latest
the source mount point name is ubervol, pointing to the target /vol that resides within the container, as shown below:
user@machine:~$ docker container exec -it b4fd sh
/ # pwd
/
/ # ls vol
vol
ubervol sits outside the container, in the /var/lib/docker/volumes/ubervol path of the host machine (the machine hosting the Docker daemon).
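One way to confirm that path (a sketch; ubervol is the volume from the example above):
docker volume inspect ubervol
# The "Mountpoint" field of the output shows the host path,
# typically /var/lib/docker/volumes/ubervol/_data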
With the below Dockerfile:
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN mkdir -p /var/jenkins_home
RUN chown -R jenkins:jenkins /var/jenkins_home
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
my understanding is that the targets sit within the container at the paths /var/log/jenkins and /var/jenkins_home.
What is the source mount point name for each target (/var/log/jenkins and /var/jenkins_home)?
What is the path of each mount point on the host machine?
The location of the volume data on the host is an implementation detail that you shouldn't try to take advantage of. In some environments, like the Docker Desktop for Mac application, the data will be hidden away inside a virtual machine you can't directly access. While I've rarely encountered one, there are also alternate volume drivers that would let you store the content somewhere else.
Every time you docker run a container based on an image that declares a VOLUME, if you don't mount something else with a -v option, Docker will create an anonymous volume and mount it there for you (in the same way as if you didn't specify a --mount source=...). If you start multiple containers from the same image, I believe each gets a new volume (with a different host path, if there is one). The Dockerfile cannot control the location of the volume on the host; the operator could mount a different named volume or a host directory instead.
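One way to see this in practice (a sketch; the image tag jenkins-example and the container names are assumptions for illustration):
# Each container started from the image gets its own anonymous volumes
docker run -d --name jenkins-a jenkins-example
docker run -d --name jenkins-b jenkins-example
# Show the volume name and host path behind each declared VOLUME
docker inspect -f '{{ range .Mounts }}{{ .Name }} => {{ .Source }}{{ "\n" }}{{ end }}' jenkins-a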
In practice there's almost no point to using VOLUME in a Dockerfile. You can use docker run -v whether or not there's a VOLUME for the same directory. Its principal effect is to prevent future RUN commands from modifying that directory.
I'm pretty new to docker containers. I understand there are ADD and COPY operations so a container can see files. How does one give the container access to a given directory where I can put my datasets?
Let's say I have a /home/username/dataset directory; how do I make it appear at /dataset (or similar) in the Docker container so I can reference it?
Is there a way for the container to reference a directory on the main system, so you don't have to have duplicate files? Some of these datasets will be quite large, and while I can delete the original after copying it over, that's just annoying if I want to do something with the files outside the Docker container.
You cannot do that at build time. If you want the files during the build, you need to copy them into the build context.
Otherwise, when you run the container, you can do a volume bind mount:
docker run -it -v /home/username/dataset:/dataset <image>
Directories on the host can be mapped to directories inside the container.
If you are using docker run to start your container, you can include the -v flag to mount volumes.
docker run --rm -v "/home/username/dataset:/dataset" <image_name>
If you are using a compose file, you may include volumes using:
volumes:
  - /home/<username>/dataset:/dataset
For a detailed description of how to use volumes, you may visit Use volumes in the Docker documentation.
Docker documentation says that it's possible to mount a single file into a Docker container:
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
When I try that however the file mounts as a directory:
tom@u ~/project $ docker run --rm -it -v file.json:/file.json test
total 80K
drwxr-xr-x 9 root root 4.0K Dec 7 12:58 .
drwxr-xr-x 63 root root 4.0K Dec 7 12:58 ..
drwxr-xr-x 2 root root 4.0K Dec 4 16:10 file.json
My Dockerfile looks like this:
FROM ubuntu:14.04
MAINTAINER Tom
CMD ["ls", "-lah", "/test"]
Docker version is 1.9.1, build a34a1d5.
Is this a documentation issue, a misunderstanding on my side, or is there something else going on?
Maybe that's clear in the answers above... but it took me some time to figure it out in my case.
The underlying reason the file shared with -v appears as a directory instead of a file is that Docker could not find the file on the host. So Docker creates a new directory in the container, named after the non-existing host file, because Docker assumes the user just wants to share a volume/directory that will be created in the future.
So, for the problem reported above: if you used a relative path in the -v option (Docker does not understand relative paths), the file was not found on the host, and Docker created a directory. The answer above which suggests using $(pwd) is the correct solution when the problem is due to a relative path.
But for those reading this page who are not using a relative directory and are having the same problem... then try to understand why the file is missing on the host.
It could just be a stupid typo...
It could be that you're running the "docker run" command from a client which spawns the Docker container on a different host, and the file being shared does not exist on that other host. The file shared with -v must exist on the host where the Docker daemon will spawn the container, not necessarily on the client where the "docker run -v ..." command is executed (although they are the same in many cases).
There are other possible explanations above for Mac and Windows... that could be it too.
So the file missing from the host is the problem... troubleshoot that in your setup... using $(pwd) could be the solution, but not always.
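A quick way to check before blaming Docker (a sketch; file.json is the file name from the question):
# Run this on the machine where the Docker daemon runs
test -f "$(pwd)/file.json" && echo "file exists" || echo "missing: Docker will create a directory"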
test is the name of your image that you have built with 'docker build -t test', not a /test folder.
Try a Dockerfile with:
CMD ["ls", "-lah", "/"]
or
CMD ["cat", "/file.json"]
And:
docker run --rm -it -v $(pwd)/file.json:/file.json test
Note the use of $(pwd) in order to mount the file with its full absolute path (relative paths are not supported).
By using $(pwd), you get an absolute path which does exist and respects the case, as opposed to a file name or path which might not.
A non-existing host path would be mounted as a folder in the container.
When running Docker inside Docker (by mounting /var/run/docker.sock, for example), you need to be aware that the file paths used in mounts are always the ones on your host.
So if on your host you do the following mount:
-v /tmp/foobar.txt:/my/path/foobar.txt
you should not do the following mount inside Docker:
-v /my/path/foobar.txt:/my/other/path.txt
but instead use the host file path, e.g.:
-v /tmp/foobar.txt:/my/other/path.txt
I spent a while fighting with and diagnosing this issue running Docker on Windows. It could also affect people running on Mac OS X, so I'm adding an answer here for people possibly having this issue in those environments, along with an explanation of what appears to be happening in Docker.
On Windows or Mac OS X, your Docker is actually running in a boot2docker VM, and only the users directory is shared by default. On Windows, this user directory is shared as /c/Users/; however, in the MinGW shell shipped with Docker Machine, the drive can be accessed as /C or /c. This can drive you nuts if you forget that the docker commands actually run against the boot2docker VM: your file paths have to exist on the boot2docker VM and be specified the way they exist there. Worse, instead of giving a warning or error that the directory/file does not exist, Docker silently creates the specified source as a directory in the boot2docker VM, so there is no ready output to indicate that you are doing anything incorrectly.
So, as in the answer above, if your file is mounting as a directory, check that you are providing an absolute path. For Windows and Mac OS X, check that the absolute path you are mounting exists in your boot2docker VM.
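For example, on Windows with boot2docker/Docker Machine, a file under the shared users directory would be referenced with the VM-side path (the project path here is an illustrative assumption):
docker run --rm -it -v /c/Users/yourname/project/file.json:/file.json test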
A case where Docker might not find the file, even though you're sure it exists
As edi9999 pointed out, if you tell the Docker daemon to mount a file, it won't look into your current container's filesystem; it will look into the filesystem where the daemon is running.
You might have this problem if your docker daemon is running elsewhere for some reason.
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
/ # echo "bar" > /foo
/ # docker run --rm -v /foo:/foo ubuntu bash -c 'cat foo'
cat: foo: Is a directory
Docker can't find the /foo file on its host, so it (helpfully?) creates a directory there, so at least you've mounted something.
A Workaround
You can work around this by mounting a host directory into the outer container, and then using that directory for the volume you want to appear in the inner container:
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /dev/shm:/dev/shm docker
/ # echo "bar" > /dev/shm/foo
/ # docker run --rm -v /dev/shm/foo:/dev/shm/foo ubuntu bash -c 'cat /dev/shm/foo'
bar
This makes the path /dev/shm/foo refer to the same file in either context, so you can reference the file from the outer container, and the daemon will find it on the host, which means it will show up as itself in the inner container rather than as a directory.
For "Docker for Mac/Windows" users, make sure the volume you're trying to mount from your host is part of your "File sharing" preferences:
There is a simple solution for those who use a VirtualBox machine.
By default, the C:/Users folder is added. If your project is in C:/projects, add this folder to make it available in VirtualBox (with automount).
I had the same problem discussed here with Docker on my MacBook, and none of the suggestions worked for me. It turns out the issue was that I did not have permissions on the file I was trying to mount (it was owned by root). I copied it to my user path, changed ownership to myself, and then the file mounted as a file and not a directory. I think that if you don't have permissions, Docker interprets this as the file simply not existing and proceeds to mount it as a new directory.
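A sketch of that fix (the file path is an illustrative assumption):
# Copy the root-owned file under your home directory and take ownership
sudo cp /etc/some-config.json ~/some-config.json
sudo chown "$USER" ~/some-config.json
docker run --rm -v ~/some-config.json:/some-config.json ubuntu cat /some-config.json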
In my case, on macOS, I had something like this:
version: "3.7"
services:
if_django:
build: ./web/
image: if_django
restart: always
container_name: cf_django
command: >
sh -c "python manage.py collectstatic --noinput
&& uwsgi --ini core.uwsgi.ini"
volumes:
- ./logs/core.log:/code/core.log
- uwsgi-data:/tmp/uwsgi/
- web-static:/code/static/
- web-static:/var/www/core/assets/
if_nginx:
build: ./nginx/
image: if_nginx
restart: always
container_name: cf_nginx
volumes:
- uwsgi-data:/tmp/uwsgi/
- web-static:/var/www/core/assets/:ro
- ./logs:/var/log/nginx/
ports:
- 8800:80
depends_on:
- if_django
volumes:
uwsgi-data:
web-static:
And the ./logs/core.log file was created as a directory, which made the cf_django container fail to start up.
My solution?
mkdir -p logs && touch logs/core.log && docker-compose up --build -d
I will share my experience: I tend to change my passwords a lot, and I had to "reset credentials" in Docker Desktop for Windows for files to be mounted correctly. (Of course, you should also make sure that the files are in the shared paths.)