Suppose I have the following configuration file on my Docker host, and I want multiple Docker containers to be able to access this file.
/opt/shared/config_file.yml
In a typical non-Docker environment I could use symbolic links, such that:
/opt/app1/config_file.yml -> /opt/shared/config_file.yml
/opt/app2/config_file.yml -> /opt/shared/config_file.yml
Now suppose app1 and app2 are dockerized. I want to be able to update config_file.yml in one place and have all consumers (docker containers) pick up this change without requiring the container to be rebuilt.
I understand that a symlink inside a container cannot point to a file on the host machine, because the link target is resolved within the container's own filesystem.
The first two options that come to mind are:
Set up an NFS share from docker host to docker containers
Put the config file in a shared Docker volume, and use docker-compose to connect app1 and app2 to the shared config volume
I am trying to identify other options and then ultimately decide upon the best course of action.
What about host-mounted volumes? If each application is only reading the configuration, and the requirement is that it live at a different location within each container, you could do something like:
docker run --name app1 --volume /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro app1image
docker run --name app2 --volume /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro app2image
The file on the host can be mounted at a different location in each container. As of Docker 1.9 you can also have named volumes backed by volume plugins (such as Flocker) hold the data. However, both of these solutions are still per-host, and the data isn't available on multiple hosts at the same time.
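As a minimal sketch of that Docker 1.9 named-volume approach (the volume name is made up here, and the flocker driver only exists once the plugin is installed; note that named volumes mount directories, not single files):
docker volume create -d flocker --name shared-config
docker run --name app1 -v shared-config:/opt/app1/config app1image
docker run --name app2 -v shared-config:/opt/app2/config app2image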
Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp inside the container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also see Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, volume mounts go one way only: from the host into the container, and not vice-versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means that the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, the original contents are effectively hidden for the lifetime of the container.
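To make the mount --bind comparison concrete, this is roughly the equivalent plain-Linux command (an illustration only; the paths are placeholders):
mount --bind /path/on/host /path/on/container   # the host directory now shadows whatever was at the target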
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy the log files into it from your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files into the host-mounted directory).
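Here is a minimal sketch of the startup-script variant, assuming the container is started with an extra mount such as -v /var/lib/myapp-backup:/backup (the /backup path and the 60-second interval are assumptions, not from the question):
#!/bin/sh
# Hypothetical wrapper around the original entrypoint: periodically
# copy the app's log directory into the host-mounted /backup directory.
while true; do
  cp -a /var/lib/myapp/. /backup/
  sleep 60
done &
exec starter.sh run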
You could also use docker cp to move files from your container to your host. It is kinda hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly this purpose; one-off copies and debugging are great situations for it.
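For example, with the container from the question (the host destination path is arbitrary):
docker cp myapp:/var/lib/myapp /var/lib/myapp-backup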
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server
That is the opposite direction: you can mount a host folder into your container on docker run.
(without removing the current container)
I don't think so.
Right now, you can check docker inspect <containername> and see whether your logs show up under the /var/lib/docker/volumes/... directory associated with your container's volume.
Or you can redirect the output of docker logs <containername> to a host file.
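For example, with the container from the question (the inspect format string assumes a Docker version whose output includes the .Mounts field, as recent releases do):
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' myapp
docker logs myapp > /tmp/myapp-console.log 2>&1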
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host. I want to trigger the copy from the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option, since I don't have a login/password/keys for the host from within the running container.
Is it even possible to pull/copy a file into a container from a service running inside the container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. An idea is that if:
the host has a web server (or FTP server) up and running
and the file is located in the appropriate directory (so that it can be served)
then maybe you can use wget or curl to get the file. Keep in mind that you might need credentials, though...
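A minimal sketch of that idea (the port and paths are assumptions, and 172.17.0.1 is merely the usual default-bridge gateway through which a container can reach its host):
On the host, serve the directory containing the file:
cd /path/on/host && python3 -m http.server 8000
Inside the container, fetch it:
curl -o /tmp/thefile http://172.17.0.1:8000/thefile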
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your docker container, customize the docker image to read the file from that path, and use the file as required.
You could validate this in the docker entrypoint script.
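As a hedged sketch of that validation step (CONFIG_PATH is a made-up variable name, passed with -e CONFIG_PATH=... at run time):
#!/bin/sh
# Hypothetical entrypoint: refuse to start unless the configured file is readable.
if [ ! -r "$CONFIG_PATH" ]; then
  echo "config file not readable: $CONFIG_PATH" >&2
  exit 1
fi
exec "$@"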
I need to start, stop and restart containers from inside another container.
For Example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an SSH connection. Any ideas?
A container runs, per se, in an isolated environment (e.g. with its own file system or network stack) and thus has no direct way to interact with the host it is running on. This is of course intended, to allow for real isolation.
But there is a way to run containers with some more privileges. To talk to the docker daemon on the host, you can for example mount the docker socket of the host system into the container. This works the same way as you probably would mount some host folder into the container.
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, please see the docker-compose file of the traefik proxy, which is a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the traefik proxy repository.
To be able to talk to the docker daemon on the host, you then also need a docker client installed in the container, or a docker API library for your programming language. There is an official list of such libraries for different programming languages in the docker docs.
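A minimal sketch of the whole approach (containerB and containerC are the names from the question; controller-image is a made-up name for an image with a docker client installed):
Start container A with access to the host's daemon:
docker run -d --name containerA -v /var/run/docker.sock:/var/run/docker.sock controller-image
Inside container A, ordinary CLI commands now act on the host's containers:
docker start containerB
docker stop containerC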
Of course, you should be aware of what privileges you give to the container. Someone who manages to exploit your application could possibly shut down your other containers or, even worse, start their own containers on your system, which can easily be used to gain control over the whole host. Keep that in mind when you build your application.
I understand the concept of data-only containers
But why would you use a data-only container over a simple host mount, given that data-only containers seem to make it harder to find the data?
When you don't want to manage the mount yourself and don't need to find the data frequently. A good example is database containers, where using a data-only container provides you with the following conveniences:
You don't even need to know which volumes a mature container expects you to create, e.g.
docker run --name my-data tutum/mysql:5.5 true
docker run -d --name my --volumes-from my-data tutum/mysql:5.5
Simplified management via docker: you don't have to manually delete the host directory or create a new path when you need to start anew.
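For example, the cleanup side of that convenience (docker rm -v removes the container together with any of its anonymous volumes that no other container still references):
docker rm -v my-data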
If I launch a docker container with
docker run -v /foo:/foo ...
I can see the contents of /foo on the host, from within the container.
While the docker container is running, if I run
mount -t ext4 /dev/... /foo/something
I will NOT see the new mount point in /foo inside the container. Is there any way to make it show up? (If I launch the docker container AFTER the mount point on the host is established, it works fine.)
Docker containers run in a private mount namespace, which means that mounts made on the host after the container starts do not propagate into the container. The kernel documentation on shared subtrees goes into detail about mount propagation and private vs shared vs slave mounts.
The short answer to your question is that there isn't an easy way to expose a new mount like this into a container. It's possible, probably involving the use of nsenter to run commands inside the container namespace to change the flags on the mounts, but I wouldn't go there.
In general, if you need to change the storage configuration of a container, you re-deploy the container.
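That said, newer Docker releases accept a bind-propagation flag in the -v volume spec (this is version-dependent; check your release), which avoids the problem for containers you start from scratch:
mount --bind /foo /foo          # ensure /foo is itself a mount point
mount --make-shared /foo        # mark it shared so later mount events propagate
docker run -v /foo:/foo:rshared ...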