Mount seeded volume on Linux host

I'd like to create a database image and seed it with some initial data. This seems to work fine, since I'm able to create a container with a volume managed by Docker. However, when I try to mount the volume to a directory on my Linux host machine, it is created empty instead of seeded.
After several hours of trying different configurations, I narrowed the problem down to its core: the content of the folder in the container associated with the volume on the host machine is overwritten upon creation.
Below is a simple Dockerfile that creates a folder containing a single file. When the container is started, it prints out the content of the mounted folder.
FROM ubuntu:18.04
RUN mkdir /opt/test
RUN touch /opt/test/myFile.txt
VOLUME /opt/test
CMD ["ls", "/opt/test"]
I'm building the image with: docker build -t test .
Volume managed by Docker
$ docker run -v volume-test:/opt/test --name test test
myFile.txt
Here I'm getting the expected output, with the volume mounted in the space managed by Docker. The output of docker volume inspect volume-test is:
{
    "CreatedAt": "2020-05-13T10:09:29+02:00",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/volume-test/_data",
    "Name": "volume-test",
    "Options": null,
    "Scope": "local"
}
Volume mounted on host machine
$ docker run -v $(pwd)/volume:/opt/test --name test test
Here nothing is returned, since the folder is empty... However, I can see that the volume directory is created and is owned by root, even though I'm executing the docker run command as another user.
drwxr-xr-x 2 root root 4096 May 13 10:11 volume
As a last test, I tried to see what happens when I create the folder for the volume beforehand and add some content to it (in my case a file called anotherFile.txt).
When I now run the container, I get the following result:
$ docker run -v $(pwd)/volume:/opt/test --name test test
anotherFile.txt
This lets me conclude that the content of the folder in the container is overwritten by the content of the folder on the host machine.
I can verify as well with docker inspect -f '{{ .Mounts }}' test, that the volume is mounted at the right place:
[{bind /pathWhere/pwd/pointedTo/volume /opt/test true rprivate}]
Now my question: is there a way to get the same behavior for volumes mounted on the host machine as for volumes managed by Docker, where the content of the /opt/test folder in the container is copied into the folder on the host defined as the volume?
Sidenote: this seems to be the case when using Docker on Windows with the Shared Folders option enabled...
Furthermore, it seems a similar question was already asked here, but no answer was found. I decided to make a separate post since I think this is the most generic example of this issue.

Desired situation
Data from within the docker image is placed in a specified path on the host.
Your current situation
When creating the image, data is put into /opt/test
When starting the container, you mount the volume on /opt/test
Problem
Because you mount on the same path as where you have put your data, your data gets overwritten.
Solution
Create a file within the image during docker build, for example touch /opt/test/data/myFile.txt.
Use a different path to mount your volume, so that the data is not overwritten, for example /opt/test/mount.
Use CMD to copy the files to the volume. Note that the exec form does not expand the * glob, so wrap the copy in a shell, like so: CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/"]
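Putting the three steps together, a minimal sketch of such a Dockerfile (the paths are illustrative) could look like this:
FROM ubuntu:18.04
RUN mkdir -p /opt/test/data /opt/test/mount
RUN touch /opt/test/data/myFile.txt
VOLUME /opt/test/mount
# shell form so the * glob is expanded; -n avoids overwriting files already present on the host
CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/ && ls /opt/test/mount"]
Running docker run -v $(pwd)/volume:/opt/test/mount --name test test should then leave myFile.txt in the host folder.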
Consulted sources
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
https://docs.docker.com/storage/volumes/

Related

mounting volume from inside the container in local directory [duplicate]

Assume that I have an application with this simple Dockerfile:
//...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp of the Docker container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also see Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from Docker to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host into the container, not vice versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, that effectively means it is overridden for the lifetime of the container.
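You can see the same "override" behavior with a plain bind mount, without Docker at all (a quick sketch; the paths are arbitrary):
mkdir -p /tmp/src /tmp/dst
touch /tmp/dst/original.txt
sudo mount --bind /tmp/src /tmp/dst
ls /tmp/dst    # empty: original.txt is hidden, not deleted
sudo umount /tmp/dst
ls /tmp/dst    # original.txt is visible again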
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host-directory volume mount).
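For the first option, a rough sketch of such a startup script, assuming the host directory is bind-mounted at /backup inside the container and the image's ENTRYPOINT is changed to point at this wrapper:
#!/bin/sh
# copy the app's files into the host-mounted backup directory, then start the app as before
cp -r /var/lib/myapp/. /backup/
exec starter.sh "$@"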
You could also use docker cp to move files from your container to your host. That is kinda hacky, and definitely not something you should use in your infrastructure automation, but it works very well for exactly this purpose: one-off copies or debugging.
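For example, to pull the logs out of the running container from the question:
docker cp myapp:/var/lib/myapp ./myapp-logs-backup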
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into something like rsyslog to ship those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your logs are in the /var/lib/docker/volumes/... directory associated with the volume from your container.
Or you can redirect the result of docker logs <containername> to a host file.
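For example (the host file path is arbitrary):
docker logs myapp > ~/myapp.log 2>&1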
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)

Docker mount files to my local system while running the container

I am using the image academind/node-example-1, which is a simple Node image. You can check it here: https://hub.docker.com/r/academind/node-example-1. What I want is to get, while running the image, the same folder and file structure that is in the image. I know that can be done via a volume. When I use:
docker run -d --rm --name node-test -p 5000:80 academind/node-example-1
everything works properly. But I want to get the codebase while running, so I tried:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:node-example-1 academind/node-example-1
Here node-docker is my local folder where I expect the code to be. It runs, but I don't get the files on the local machine. I'm in doubt about the source_path:destination_path order here. Please correct me and tell me where I'm wrong, what to do, or whether my entire thinking is going in the wrong direction.
Thanks.
If you read the official doc, you'll see that the first part of the : should be a path somewhere on the host machine (which you're doing), while the latter part should match a path inside the container (instead, you're using the image name). Assuming /app is the path (I've taken that course myself and this is the path AFAIR), it should be:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:/app academind/node-example-1
I think the correct syntax is to enter the drive information in the volume mapping,
e.g. /Users/xxxx:/some/drive/location
However, that would map your empty drive at xxxx over the top of the 'location' folder, thus hiding the existing files in the container for as long as the mount is in place.
If you are interested in seeing the contents of the files in the container, you should consider using the docker cp command.
People often use volume mounts to push data (e.g. persistent database files) into a container.
Alternatively, the application container writes log files to the volume-mounted location, and those files are then reflected on your local drive.
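For example, if the image wrote its logs to a known path (the /app/logs path here is hypothetical), a bind mount would surface them on the host:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-logs:/app/logs academind/node-example-1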
You can copy the files to the current host directory, while the container is running, using the command:
docker cp node-test:/app .

How to mount a host directory into a running docker container

I want to mount my USB drive into a running Docker instance to manually back up some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyways. Delete and restart your container and use a -v option to mount a volume on where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
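The documented named-volume backup pattern looks roughly like this (the volume and file names here are hypothetical):
# tar the contents of a named volume into the current host directory
docker run --rm -v nextcloud-data:/data -v "$PWD":/backup ubuntu tar czf /backup/nextcloud-data.tgz -C /data .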
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
As David Maze mentioned, it's almost impossible to change the volume location of an existing container using normal Docker commands.
I found an alternative way that works for me. The main idea is to convert the existing container into a new Docker image and initialize a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
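For example, for the nextcloudpi container from the question, that could look like this (the names are illustrative):
docker commit nextcloudpi nextcloudpi-snapshot
docker run -d --name nextcloudpi-new -v /mnt/usb-drive:/mnt/disk nextcloudpi-snapshot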
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
(change /dev/sda1 to match your device, of course)
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation

Docker -- mounting a volume not behaving like regular mount

I am new to Docker, so I am certain I am doing something wrong. I am also not a PHP developer, but that shouldn't matter in this case.
I am using a Drupal Docker image which has data at the /var/www/html directory.
I am attempting to overwrite this data with a Drupal site from a local directory on the host system.
According to the docs, this is the expected behavior:
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine's host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
However, I am finding that the local Drupal site files do not exist in the container. My complete workflow is as follows:
docker-compose.yml
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
    - "8022:22"
    - "443"
#volumes: THIS IS ALSO NOT WORKING
#- /home/sameh/empower-tap:/var/www/html
$ docker-compose up -d
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/www/html empower_drupal1
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
$ docker run -t -i empower_drupal2 /bin/bash
The empower_drupal2 container does not have the correct files from the /home/sameh/empower-tap directory.
Why this did not work
Here's what you did, with some annotations.
$ docker-compose up -d
Given your docker-compose.yml, with the volumes section commented out, at this point you have running container, but no volumes mounted.
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
All you've really done here is made a copy of the image you had already, unless your container makes changes to itself on startup.
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/www/html empower_drupal1
Here you have run your new copy and mounted a volume. OK, the files are available in this container now.
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
I'm assuming here that you wanted to commit the contents of the volume into the image. That will not work. The commit documentation is clear about this point:
The commit operation will not include any data contained in volumes mounted inside the container.
$ docker run -t -i empower_drupal2 /bin/bash
So, as you found, when you run the image generated by commit, but without volume mounts, the files are not there.
Also, it is not clear in your docker-compose.yml example where the volumes: section was before it was commented out. Currently it seems to be on the left margin, which would not work. It would need to be at the same level as build: and ports: in order to work on your drupal service.
What to do instead
That depends on your goal.
Just copy the files from local
If you literally just want to populate the image with the files from your local system, you can do that in Dockerfile.
COPY local-dir/* /var/www/html
You mentioned that this copy can't work because the directory is not local. Unfortunately that cannot be solved easily with something like a symlink. Your best option is to copy the directory to the local context before building. Docker does not plan to change this behavior.
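For example (a sketch, assuming the Dockerfile lives in ./build/drupal-local-codebase as in your compose file):
# copy the external directory into the build context, then rebuild
cp -r /home/sameh/empower-tap ./build/drupal-local-codebase/empower-tap
docker-compose build
The Dockerfile could then COPY empower-tap/ /var/www/html/.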
Override contents for development
A common scenario is that you want to use your local directory for development, so that changes are reflected right away instead of requiring a rebuild. But when not doing development, you want the files baked into the image.
In that case, start by telling the Dockerfile to copy the files into the image, as above. That way an image build will contain them, volume mount or no.
Then, when you are doing development, use volumes: in docker-compose.yml, or the -v flag to docker run, to mount a volume. A volume mount will override whatever is baked into the image, so you will be using your local files. When you're done and the code is ready to go, just do an image build and your final files will be baked into the image for deployment.
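With the paths from the question, the development-time override is just the volumes block from the docker-compose.yml, correctly indented under the service:
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
  volumes:
    - /home/sameh/empower-tap:/var/www/html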
Use a volume plus a commit
You can also do this in a slightly roundabout way by mounting the volume, copying the contents elsewhere, then committing the result.
# start a container with the volume mounted somewhere
docker run -d -v /home/sameh/empower-tap:/var/www/html_temp [...etc...]
# copy the files elsewhere inside the container
docker exec <container-name> cp -r /var/www/html_temp/. /var/www/html/
# commit the result
docker commit empower_drupal empower_drupal1
Then you should have your mounted volume files in the resulting image.

How to mount a directory inside a docker container on Linux host [duplicate]

This question already has answers here: Mount directory in Container and share with Host (3 answers). Closed 4 years ago.
I would like to mount a directory from a Docker container to the local filesystem. The directory is a website root, and I need to be able to edit it on my local machine using any editor.
I know I can run docker run -v local_path:container_path, but doing that only creates an empty directory inside the container.
How can one mount a directory inside a docker container on linux host?
It is a bit weird, but you can use named volumes for that. Unlike host-mounted volumes, named ones won't be emptied, and you can still access the directory. See the example:
docker volume create --name data
docker run --rm=true -v data:/etc ubuntu:trusty
docker volume inspect data
[
    {
        "Name": "data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/data/_data",
        "Labels": {},
        "Scope": "local"
    }
]
See the mount point?
mkdir ~/data
sudo -s
cp -r /var/lib/docker/volumes/data/_data/* ~/data
echo "Hello World">~/data/hello.txt
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/fstab #The content is preserved
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/hello.txt #And your changes too
It is not exactly what you were asking for, but depending on your needs it may work.
Regards
If your goal is to provide a ready-to-go LAMP stack, you should use the VOLUME declaration inside the Dockerfile.
VOLUME volume_path_in_container
The problem is that Docker will not mount the files because they were already present in the path you are creating the volume on. You can go with what #Grif-fin said in his comment, or modify the entrypoint of the container so that it copies the files you want to expose into the volume at run time.
You have to insert your data using the COPY or ADD command in the Dockerfile, so the base files are present in the container.
Then create an entrypoint that copies the files from the COPY path to the volume path.
Then run the container using the -v flag, like -v local_path:volume_path_in_container. Like this, you should have the files from the container mounted locally. (At least, that's what I had.)
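A minimal sketch of that pattern (the file names and paths are illustrative):
# Dockerfile
FROM ubuntu:18.04
# bake the site files into the image at a staging path
COPY site/ /opt/site-src/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
VOLUME /var/www/html
ENTRYPOINT ["/entrypoint.sh"]
# placeholder for the real server process
CMD ["sleep", "infinity"]
# entrypoint.sh
#!/bin/sh
# populate the (possibly empty) mounted volume from the staged copy, then hand off to CMD
cp -rn /opt/site-src/. /var/www/html/
exec "$@"
Started with docker run -v "$PWD/site:/var/www/html" <image>, the mounted host folder gets populated on first run.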
Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.
It avoids having to build every time, and you can provide it from a definitive image.
To be nicer, it would be good to add user support, so that you are the owner of the mounted files (if you are not root).
Hope it was useful to you.
