Dockerfile: run commands in volume - linux

I am building an image based on an existing image from the docker hub. In this official image, there is a volume declared (the data directory of a database).
I want to add files to this directory (to initialize a database).
However, after every command, the content of this directory seems to have disappeared.
How can I apply changes or create files in this volume directory?

You can mount the VOLUME declared in the image onto a directory on the host. That way, whatever the container writes to the volume is stored on the host. Note that this is also why your files keep disappearing during the build: once a path is declared as a VOLUME, changes that later RUN instructions make to it are discarded at the end of each build step.
You can mount your volume like this:
docker run -v /foo:/bar example/example command
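For example (a sketch reusing the placeholder image and paths from the command above; substitute your database image and its real data directory):
# Create a host directory to back the volume
mkdir -p /srv/dbdata
# Mount it over the image's data directory; files the database (or your
# initialization scripts) create there now persist on the host
docker run -d -v /srv/dbdata:/bar example/example command
ls /srv/dbdata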

Related

mounting volume from inside the container in local directory [duplicate]

Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp of the Docker container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the Docker container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, the mount works in one direction only: the host directory is mounted over the container directory, not vice versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, it is effectively overridden for the lifetime of the container.
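You can observe that "override" behavior with a plain bind mount, no Docker involved (a minimal sketch; mount requires root):
# A bind mount hides, but does not delete, the target's existing content
mkdir -p /tmp/host-dir /tmp/container-dir
touch /tmp/container-dir/original.txt
sudo mount --bind /tmp/host-dir /tmp/container-dir
ls /tmp/container-dir   # empty: original.txt is hidden while the mount is active
sudo umount /tmp/container-dir
ls /tmp/container-dir   # original.txt is back, untouched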
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host-directory volume mount).
You could also use docker cp to move files from your container to your host (sketched below). That is kinda hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly that purpose: one-off copies and debugging.
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into something like rsyslog to ship those files off your container.
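The docker cp route is a one-liner (assuming the container from the question is named myapp):
# Copy the whole log directory out of the (running or stopped) container
docker cp myapp:/var/lib/myapp ./myapp-logs
ls ./myapp-logs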
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your logs show up under /var/lib/docker/volumes/... in the volume associated with your container.
Or you can redirect the output of docker logs <containername> to a host file.
For more examples, see this gist.
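For instance (a sketch, keeping the placeholder container name):
# Show where the container's volumes live on the host
docker inspect -f '{{ json .Mounts }}' <containername>
# Redirect the container's stdout/stderr log to a host file
docker logs <containername> > /tmp/container.log 2>&1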
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)

Mount seeded volume on linux host

I'd like to create a Database image and seed it with some initial data. This seems to work fine, since I'm able to create a container with a volume managed by docker. However, when I try to mount the volume to a directory on my linux host machine, it is created empty instead of seeded.
After several hours of trying different configurations, I narrowed the problem down to its core: the content of the container folder associated with the volume on the host machine is overwritten upon creation.
Below a simple Dockerfile that creates a folder containing a single file. When the container is started it prints out the content of the mounted folder.
FROM ubuntu:18.04
RUN mkdir /opt/test
RUN touch /opt/test/myFile.txt
VOLUME /opt/test
CMD ["ls", "/opt/test"]
I'm building the image with: docker build -t test .
Volume managed by docker
$ docker run -v volume-test:/opt/test --name test test
myFile.txt
Here I'm getting the expected output, with the volume mounted in the space managed by Docker. The output of docker volume inspect volume-test is:
{
    "CreatedAt": "2020-05-13T10:09:29+02:00",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/volume-test/_data",
    "Name": "volume-test",
    "Options": null,
    "Scope": "local"
}
Volume mounted on host machine
$ docker run -v $(pwd)/volume:/opt/test --name test test
Where nothing is returned, since the folder is empty... However, I can see that the volume directory was created and is owned by root, even though I'm executing the docker run command as another user.
drwxr-xr-x 2 root root 4096 May 13 10:11 volume
As a last test, I tried to see what happens when I create the folder for the volume beforehand and add some content to it (in my case a file called anotherFile.txt).
When I now run the container, I get the following result:
$ docker run -v $(pwd)/volume:/opt/test --name test test
anotherFile.txt
Which lets me conclude that the content of the folder in the container is overwritten by the content of the folder on the host machine.
I can verify as well with docker inspect -f '{{ .Mounts }}' test, that the volume is mounted at the right place:
[{bind /pathWhere/pwd/pointedTo/volume /opt/test true rprivate}]
Now my question: Is there a way to have the same behavior for volumes on the host machine as for the volumes managed by docker, where the content of the /opt/test folder in the container is copied into the folder on the host defined as volume?
Sidenote: this seems to be the case when using Docker on Windows with the Shared Folders option enabled...
Furthermore, it seems a similar question was already asked here, but no answer was found. I decided to make a separate post since I think this is the most generic example to describe this issue.
Desired situation
Data from within the docker image is placed in a specified path on the host.
Your current situation
When creating the image, data is put into /opt/test
When starting the container, you mount the volume on /opt/test
Problem
Because you mount the volume on the same path where you have put your data, your data gets overwritten.
Solution
Create a file within the image during docker build, for example touch /opt/test/data/myFile.txt.
Use a different path to mount your volume, so that the data is not overwritten, for example /opt/test/mount.
Use CMD to copy the files into the volume at startup. Note that the exec form of CMD does not expand wildcards, so run the copy through a shell: CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/"]. (A full sketch follows this list.)
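Putting the three steps together, a minimal sketch based on the Dockerfile from the question (the /opt/test/data and /opt/test/mount paths are illustrative):
FROM ubuntu:18.04
# Keep the seed data outside the mount point, so a bind mount cannot hide it
RUN mkdir -p /opt/test/data /opt/test/mount
RUN touch /opt/test/data/myFile.txt
VOLUME /opt/test/mount
# On startup, copy the seed files into the (possibly host-mounted) volume;
# -n leaves files that already exist on the host untouched
CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/ && ls /opt/test/mount"]
Building with docker build -t test . and running docker run -v $(pwd)/volume:/opt/test/mount --name test test should now print myFile.txt and leave a copy of it in ./volume on the host.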
Consulted sources
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
https://docs.docker.com/storage/volumes/

Writing log files in NodeJS Docker container

I want to write log files to the host file system, so it is persisted, even if the Docker container dies.
Do I need to mount a volume in my Docker yaml?
VOLUME /var/log/myApp
Then do I just reference the mount like this?
var stream = fs.createWriteStream(`/var/log/myApp/myLog.log`);
stream.write('Hello World!');
Then outside of my container, I can go to the /var/log/myApp/ directory and see my logs.
I am trying to find an example of this, but haven't seen anything.
When you're setting up your container, you just use the -v argument:
-v "$(pwd)/path/to/local/directory:/var/log/myApp"
The first path is where the volume is available on the host system. Note that docker run wants an absolute host path here (which is what "$(pwd)" produces; a relative ./ path is only reliably supported in docker-compose). The path on the right-hand side is where it's available in the container.
Once more, in docker-compose:
volumes:
- "./path/to/local/directory:/var/log/myApp"
And yes, this will allow the data stored in the volume to be persistent.
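Putting it together for the question's setup (a sketch; my-node-app is a placeholder image name):
# Create the host directory and bind-mount it over the app's log path
mkdir -p "$(pwd)/logs"
docker run -d -v "$(pwd)/logs:/var/log/myApp" my-node-app
# Log files written by the Node app show up on the host
ls "$(pwd)/logs"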

How to mount a host directory into a running docker container

I want to mount my USB drive into a running Docker instance to manually back up some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyways. Delete and restart your container, and use a -v option to mount a volume over where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
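That last step is a one-liner (a sketch; the container name nextcloudpi and the /data path are placeholders for your actual setup):
# Copy the data out of the existing container before recreating it
docker cp nextcloudpi:/data ./nextcloudpi-backup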
As David Maze mentioned, it's almost impossible to change the volume configuration of an existing container using normal docker commands.
I found an alternative way that works for me. The main idea is to convert the existing container into a new Docker image and start a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on the top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
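For example, with placeholder names filled in (mynextcloudpi as the existing container, nextcloudpi-snapshot as the snapshot image, /mnt/usb-drive as the host folder):
# Snapshot the existing container's filesystem as a new image
docker commit mynextcloudpi nextcloudpi-snapshot
# Start a replacement container from the snapshot, with the host folder mounted
docker run -d -v /mnt/usb-drive:/backup nextcloudpi-snapshot
One caveat: docker commit captures the container's filesystem but not the data stored in its volumes, so anything already living in a volume has to be copied separately (for example with docker cp).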
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
Change /dev/sda1 to match your drive, of course.
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
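To check that the propagation works (using the names from the commands above):
# After mounting the drive on the host, it should be visible inside the container
docker exec mynextcloudpi ls /mnt/disk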

Nvidia-docker add folder to container

I'm pretty new to docker containers. I understand there are ADD and COPY operations so a container can see files. How does one give the container access to a given directory where I can put my datasets?
Let's say I have a /home/username/dataset directory; how do I make it appear at /dataset or something in the Docker container so I can reference it?
Is there a way for the container to reference a directory on the main system, so you don't have to have duplicate files? Some of these datasets will be quite large, and while I can delete the original after copying it over, that's just annoying if I want to do something with the files outside the Docker container.
You cannot do that at build time; if you want the files available during the build, you need to copy them into the build context.
Otherwise, when you run the container, use a volume bind mount:
docker run -it -v /home/username/dataset:/dataset <image>
Directories on host can be mapped to directories inside container.
If you are using docker run to start your container, then you can include -v flag to include volumes.
docker run --rm -v "/home/username/dataset:/dataset" <image_name>
If you are using a compose file, you may include volumes using:
volumes:
- /home/<username>/dataset:/dataset
For a detailed description of how to use volumes, you may visit Use volumes in docker
