I want to write log files to the host file system so they are persisted even if the Docker container dies.
Do I need to declare a volume in my Dockerfile, like this?
VOLUME /var/log/myApp
Then do I just reference the mount like this?
var stream = fs.createWriteStream(`/var/log/myApp/myLog.log`);
stream.write('Hello World!');
Then outside of my container, I can go to the /var/log/myApp/ directory and see my logs.
I am trying to find an example of this, but haven't seen anything.
When you're setting up your container, you just use the -v argument:
-v ./path/to/local/directory:/var/log/myApp
The first path is where the volume is available on the host system (the period at the beginning means it's relative to where you're running the docker command). The path on the right-hand side of the colon is where it's available inside the container.
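For example, a complete command might look like this (my-node-app and the ./logs directory are just placeholder names; $(pwd) is used here to build an absolute host path, which works regardless of docker version):
docker run -d -v "$(pwd)/logs:/var/log/myApp" my-node-app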
Once more, in docker-compose:
volumes:
- "./path/to/local/directory:/var/log/myApp"
And yes, this will allow the data stored in the volume to persist even if the container dies.
Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
It works properly and stores some logs in /var/lib/myapp inside the Docker container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I have also seen Docker - Mount Directory From Container to Host, but it doesn't solve my problem; I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time, which means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host to the container, and not vice versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means that the host directory will temporarily "override" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, that effectively means it will be overridden for the lifetime of the container.
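As a rough illustration of that shadowing behaviour (the /tmp/host-logs path and the ubuntu image are only placeholders):
mkdir -p /tmp/host-logs && touch /tmp/host-logs/from-host.txt
docker run --rm -v /tmp/host-logs:/var/lib/myapp ubuntu ls /var/lib/myapp
# prints only from-host.txt; whatever the image had at /var/lib/myapp is hidden
# (not deleted) for as long as the mount is in place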
So you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files into that host-directory volume mount).
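A minimal sketch of that first option, reusing starter.sh from the Dockerfile above and assuming a host directory is bind-mounted at /host-logs inside the container (names and interval are illustrative only):
#!/bin/sh
# hypothetical startup script: run the app, then periodically copy its logs
# into a directory that was bind-mounted from the host (-v /some/host/dir:/host-logs)
starter.sh run &
while true; do
  cp -r /var/lib/myapp/. /host-logs/
  sleep 60
done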
You could also use docker cp to move files from your container to your host. That is kind of hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly that purpose: one-off copies or debugging.
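For a one-off copy with the container from the question, that could be as simple as (the ./myapp-logs destination is just an example):
docker cp myapp:/var/lib/myapp ./myapp-logs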
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server
That is the opposite: you can mount a host folder into your container at docker run time.
(without removing the current container)
I don't think that is possible.
Right now, you can run docker inspect <containername> and check whether your logs show up under the /var/lib/docker/volumes/... directory associated with your container's volume.
Or you can redirect the output of docker logs <containername> to a host file.
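For example, with the myapp container from the question (the output file name is just an example):
docker inspect -f '{{ json .Mounts }}' myapp
docker logs myapp > ./myapp-console.log 2>&1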
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
I want to start the following docker container and have terminal access to it:
docker run -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
The problem is that inside the terminal I cannot find any of the files from my file system. There is no ~/Desktop or any similar directories.
Question: how to access the file system of my local PC from within the docker container?
By default, containers cannot see the file system of their host.
If you want to achieve this, you will have to explicitly "mount" whatever directories you want to see using the -v flag, like this:
docker run -v ~/Desktop:/host-desktop -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
If you run that command, you will see the contents of your desktop in the container's file system, at /host-desktop.
You really would not want your containers to be able to see the entire host file system. That would be dangerous, especially if the container has write permission. You should always mount only the exact files/directories you want the container to access.
For the most part, any project I have worked on that uses Docker does "volume mounting" so that the container can write files and the developer can easily access them on the host (e.g. Selenium tests taking screenshots), or so that the developer can edit source code and the container will see the update and hot-reload (e.g. Node.js development). When doing the latter (the hot-reload example), it is usually wise to mount in read-only mode.
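For example, a read-only source mount for the hot-reload case might look like this (the paths and the node:18 image are placeholders):
docker run -it -v "$(pwd)/src:/app/src:ro" node:18 /bin/bash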
See the docs for more details: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host. I want to trigger the copy from the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option, since I don't have the login/password/keys for the host from inside the running container.
Is it even possible to pull/copy a file into a container from a service running inside the container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. An idea is that if:
the host has a web server (or FTP server) up and running
and the file is located in the appropriate directory (so that it can be served)
maybe you can use wget or curl to get the file. Keep in mind that you might need credentials though...
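As a rough sketch of that idea (the port and file name are placeholders, and how you reach the host from the container depends on your setup: host.docker.internal works on Docker Desktop, while on Linux you would typically use the docker bridge gateway IP, often 172.17.0.1):
# on the host, serve the directory that contains the file
python3 -m http.server 8000
# inside the container
curl -O http://host.docker.internal:8000/the-file-you-need.txt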
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (the parameter mentioned above), and use the file as required.
You could validate this in the Docker entrypoint script.
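A hypothetical sketch of such an entrypoint check, assuming the file's directory is also bind-mounted into the container and the in-container path is passed as the first argument (my-service is a placeholder for your actual process):
#!/bin/sh
# entrypoint.sh: validate that the file passed as a parameter actually exists
FILE_PATH="$1"
if [ ! -f "$FILE_PATH" ]; then
  echo "Expected file not found at $FILE_PATH" >&2
  exit 1
fi
exec my-service --input "$FILE_PATH"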
I am building an image based on an existing image from the docker hub. In this official image, there is a volume declared (the data directory of a database).
I want to add files to this directory (to initialize a database).
However, after every command, the content of this directory seems to have disappeared.
How can I apply changes or create files in this volume directory?
You can mount the VOLUME declared in the image onto a directory on the host.
That way, whatever you write to the volume will be stored on the host.
You can mount your volume like this:
docker run -v /foo:/bar example/example command
If I launch a docker container with
docker run -v /foo:/foo ...
I can see the contents of /foo on the host, from within the container.
While the docker container is running, if I run
mount -t ext4 /dev/... /foo/something
I will NOT see the new mount point in /foo inside the container - is there any way to make it show up? (if I launch the docker container AFTER the mount point on the host is established, it is ok).
Docker containers run in a private mount namespace, which means that mounts made on the host after the container starts do not propagate into the container. The kernel documentation on shared subtrees goes into detail about mount propagation and private vs shared vs slave mounts.
The short answer to your question is that there isn't an easy way to expose a new mount like this into a container. It's possible, probably involving the use of nsenter to run commands inside the container namespace to change the flags on the mounts, but I wouldn't go there.
In general, if you need to change the storage configuration of a container, you re-deploy the container.
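That said, if you do re-deploy, Docker's bind-propagation flags may be worth a look: with slave or shared propagation, mounts made on the host under /foo after the container starts can propagate into it (a sketch only; whether this works depends on your kernel and Docker setup):
docker run -v /foo:/foo:rslave ...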