Why use a data-only container over a host mount?

I understand the concept of data-only containers, but why would you use one over a simple host mount, given that data-only containers seem to make it harder to find the data?

When you don't want to manage the mount yourself and don't need to find the data frequently. A good example is database containers, where using a data-only container gives you the following conveniences:
No need to even know which volumes you have to create for a mature container, e.g.
docker run --name my-data tutum/mysql:5.5 true
docker run -d --name my --volumes-from my-data tutum/mysql:5.5
Simplified management via docker. You don't have to manually delete the host directory or create a new path when you need to start anew.
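Resetting the database then comes down to plain docker commands instead of host file management; a minimal sketch, reusing the names from the commands above:
docker stop my
# -v also removes the anonymous volumes once no container uses them anymore
docker rm -v my my-data
docker run --name my-data tutum/mysql:5.5 true
docker run -d --name my --volumes-from my-data tutum/mysql:5.5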

Related

How to mount a host directory into a running docker container

I want to mount my USB drive into a running docker instance to manually back up some files.
I know about the -v option of docker run, but that creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyway. Delete and restart your container, and use a -v option to mount a volume at the location where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and it shouldn't involve explicit "backup" and "restore" steps.)
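The documentation's named-volume pattern boils down to a throwaway container that sees both the volume and a host directory; a minimal sketch, where my-volume and the archive name are only examples:
# back up the volume's contents into the current host directory
docker run --rm -v my-volume:/volume -v "$PWD":/backup busybox tar czf /backup/volume-backup.tar.gz -C /volume .
# restore them the same way
docker run --rm -v my-volume:/volume -v "$PWD":/backup busybox tar xzf /backup/volume-backup.tar.gz -C /volume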
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
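For that one-off copy, docker cp works against a running or stopped container; a quick sketch, where the container name and the path inside the container are only examples:
# copy the data out of the existing container before recreating it
docker cp mynextcloudpi:/var/www/nextcloud/data ./nextcloud-data-backup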
As David Maze mentioned, it's almost impossible to change the volume location of an existing container by using normal docker commands.
I found an alternative way that works for me. The main idea is to commit the existing container to a new docker image and start a new container on top of it. I hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on the top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
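For example, with the nextcloudpi container from the question (the names and paths below are only placeholders):
# snapshot the existing container as an image
docker commit mynextcloudpi nextcloudpi-snapshot
# start a replacement container from that image, now with the host mount
docker run -d --name mynextcloudpi-new -v /mnt/usb-drive:/mnt/disk nextcloudpi-snapshot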
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
Replace /dev/sda1 with your actual device, of course.
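To verify the setup, once the drive is mounted on the host you can list the target path inside the already-running container; a quick check, assuming the container name from the command above:
# files on the USB drive should appear inside the container without a restart
docker exec mynextcloudpi ls /mnt/disk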
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation

Stop VM with MongoDB docker image without losing data

I have installed the official MongoDB docker image in a VM on AWS EC2, and the database has already data on it. If I stop the VM (to save expenses overnight), will I lose all the data contained in the database? How can I make it persistent in those scenarios?
There are multiple options to achieve this, but the two most common ways are:
Create a directory on your host to mount the data
Create a docker volume to mount the data
1) Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir. Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
The source of this is the official documentation of the image.
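With the bind mount in place, the data lives on the host filesystem, so stopping the VM overnight doesn't lose anything; when it comes back up you just start the container again. A small sketch, reusing the names from above:
# after the VM boots again, the container and its host directory are still there
docker start some-mongo
# even if you removed the container, recreating it against the same directory brings the data back
docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag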
2) Another possibility is to use a docker volume.
$ docker volume create my-volume
This will create a docker volume in the folder /var/lib/docker/volumes/my-volume. Now you can start your container with:
docker run --name some-mongo -v my-volume:/data/db -d mongo:tag
All the data will be stored in my-volume, i.e. on the host in the folder /var/lib/docker/volumes/my-volume/_data. So even when you delete your container and create a new mongo container linked with this volume, your data will be loaded into the new container.
You can also use the --restart=always option when you perform your initial docker run command. This means that your container will automatically restart after a reboot of your VM. Since your data is persisted as well, your DB will be exactly the same before and after the reboot.
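Putting the volume and the restart policy together, a single run command (a sketch built from the commands above) would be:
docker run --name some-mongo --restart=always -v my-volume:/data/db -d mongo:tag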

Sharing a configuration file to multiple docker containers

Suppose I have the following configuration file on my Docker host, and I want multiple Docker containers to be able to access this file.
/opt/shared/config_file.yml
In a typical non-Docker environment I could use symbolic links, such that:
/opt/app1/config_file.yml -> /opt/shared/config_file.yml
/opt/app2/config_file.yml -> /opt/shared/config_file.yml
Now suppose app1 and app2 are dockerized. I want to be able to update config_file.yml in one place and have all consumers (docker containers) pick up this change without requiring the container to be rebuilt.
I understand that symlinks cannot be used to access files on the host machine that are outside of the docker container.
The first two options that come to mind are:
Set up an NFS share from docker host to docker containers
Put the config file in a shared Docker volume, and use docker-compose to connect app1 and app2 to the shared config volume
I am trying to identify other options and then ultimately decide upon the best course of action.
What about host mounted volumes? If each application is only reading the configuration, and the requirement is that it lives in a different location within each container, you could do something like:
docker run --name app1 --volume /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro app1image
docker run --name app2 --volume /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro app2image
The file on the host can be mounted at a separate location per container. In Docker 1.9 you can also have volumes from specific volume plugins (such as Flocker) hold the data. However, both of these solutions are still per-host, and the data isn't available on multiple hosts at the same time.
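To confirm both containers pick up a change, edit the file on the host and read it back from each container; a quick check using the names from the commands above, where the appended line is just an example (note that editors which replace the file with a new inode, rather than writing in place, can break a single-file bind mount):
# append a setting on the host
echo "log_level: debug" | sudo tee -a /opt/shared/config_file.yml
# both containers see the same content at their own paths
docker exec app1 cat /opt/app1/config_file.yml
docker exec app2 cat /opt/app2/config_file.yml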

ownership of files that are written by the docker container to mounted volume

I'm using a non-root user in a secured environment to run a stock DB docker container (elasticsearch). Of course, I want the data to be mounted so I won't lose it when the container is destroyed.
The problem is that this container writes to that volume with root ownership, and then the host user doesn't have permission to move or remove those files.
I know that most docker images use the root user inside, but how can I control the ownership of the files on the host machine?
You can create a data container with docker create -v /usr/share/elasticsearch/data --name esdata elasticsearch /bin/true, then use it in your container with docker run -d --volumes-from esdata --name some-elasticsearch elasticsearch.
This is the preferred data pattern for Docker; you can find out more on this Docker page.
To answer your question: use docker run --user "$(id -u)" ... and it will run the program inside the container with the current user id. You might then run into the same follow-up question I did.
I answered it there; I hope it's useful:
Docker with '--user' can not write to volume with different ownership
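Putting that together for elasticsearch, a minimal sketch (the host directory is only an example, and note that not every image runs happily as an arbitrary user, so check the image's documentation):
# prepare a host directory owned by your own user
mkdir -p "$HOME/esdata"
# run the container with your uid/gid so anything it writes stays owned by you
docker run -d --name some-elasticsearch --user "$(id -u):$(id -g)" -v "$HOME/esdata:/usr/share/elasticsearch/data" elasticsearch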

communication between containers in docker

Is there any way to communicate among docker containers other than via sockets/network? I have read the docker documentation, which says we can link docker containers using the --link option, but it doesn't specify how to transfer data/messages from one container to another. I have already created a container named checkram.
Now I want to link a new container with this container, so I run
docker run -i -t --privileged --link=checkram:linkcheck --name linkcont topimg command.
Then I checked the env variable LINKCHECK_PORT in the linkcont container, which contains tcp://172.17.0.14:22.
I don't know what to do with this ip and port, or how to communicate with the checkram container from the linkcont container. Can anyone help me out? Thanks in advance.
There are several tools you can use to run multiple docker containers and have them interact. Docker has its own tool, Docker Compose, with which you can build and wire up multiple containers.
Another tool that works as well is decking. You can also use Fig, but I found decking very straightforward and easy to configure. When I was using decking, Docker Compose had not been released yet; it is the newer tool, and it is developed by Docker itself.
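If the goal really is to pass data without going over the network, one simple option is a volume shared by both containers; a minimal sketch with hypothetical names and paths:
# create a named volume both containers can see
docker volume create shared-msgs
# one container writes a file into the shared volume...
docker run --rm --name writer -v shared-msgs:/shared busybox sh -c 'echo hello > /shared/msg.txt'
# ...and another container reads it back
docker run --rm --name reader -v shared-msgs:/shared busybox cat /shared/msg.txt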
