With the below command:
docker container run -dit --name testcontainer --mount source=ubervol,target=/vol alpine:latest
the source mount point name is ubervol, pointing to the target /vol that resides within the container, as shown below:
user@machine:~$ docker container exec -it b4fd sh
/ # pwd
/
/ # ls -d /vol
/vol
ubervol sits outside the container, under the /var/lib/docker/volumes/ubervol path on the host machine (the machine hosting the Docker daemon).
With the below Dockerfile:
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN mkdir -p /var/jenkins_home
RUN chown -R jenkins:jenkins /var/jenkins_home
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
my understanding is that the targets sit within the container at the paths /var/log/jenkins and /var/jenkins_home.
What is the source mount point name for each target (/var/log/jenkins and /var/jenkins_home)?
What is the path of each mount point on the host machine?
The location of the volume data on the host is an implementation detail that you shouldn't try to take advantage of. On some environments, like the Docker Desktop for Mac application, the data will be hidden away inside a virtual machine you can't directly access. While I've rarely encountered one, there are also alternate volume drivers that would let you store the content somewhere else.
Every time you docker run a container based on an image that declares a VOLUME, if you don't mount something else with a -v option, Docker will create an anonymous volume and mount it there for you (in the same way as if you didn't specify a --mount source=...). If you start multiple containers from the same image, I believe each gets a new volume (with a different host path, if there is one). The Dockerfile cannot control the location of the volume on the host; the operator could mount a different named volume or a host directory instead.
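For example, here is a minimal sketch (assuming the Jenkins image above has been built and tagged myjenkins, a hypothetical name) of how to see which anonymous volumes Docker created for a container and where they live on a host using the default local driver:

# run a container from the image; Docker creates one anonymous volume per VOLUME entry
docker run -d --name jenkins1 myjenkins

# list the mounts: each entry shows a random volume name (the "source") and its
# host path under /var/lib/docker/volumes/<name>/_data
docker inspect --format '{{ json .Mounts }}' jenkins1

The names are random hashes chosen at container creation time, which is why the Dockerfile alone cannot tell you the source name or its host path in advance.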
In practice there's almost no point to using VOLUME in a Dockerfile. You can use docker run -v whether or not there's a VOLUME for the same directory. Its principal effect is to prevent future RUN commands from modifying that directory.
Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
It works properly and stores some logs in /var/lib/myapp inside the container.
My question
I need these log files to be automatically saved on the host too. So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem; I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time. That means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: From the host to the container, and not vice-versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" linux mount --bind, which means that the host directory will temporarily "override" the container directory. Nothing is actually deleted or overwritten on the destination directory, but because of the nature of containers, that effectively means it will be overridden for the lifetime of the container.
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or if you bring cron into your container, you could use a cron to periodically copy those files to that host directory volume mount).
You could also use docker cp to move files from your container to your host. Now that is kinda hacky and definitely not something you should use in your infrastructure automation. But it does work very well for that exact purpose. One-off or debugging is a great situation for that.
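For example, a one-off copy of the log directory to the host could look like this (the destination path is just illustrative):

# copy the directory out of the running container onto the host
docker cp myapp:/var/lib/myapp /var/lib/myapp-backup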
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server
That is the opposite direction: you can mount a host folder into your container at docker run time.
(without removing the current container)
I don't think so.
Right now, you can check docker inspect <containername> and see whether your logs are under the /var/lib/docker/volumes/... path associated with the volume from your container.
Or you can redirect the result of docker logs <containername> to a host file.
For more examples, see this gist.
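For instance, a minimal sketch of that redirection (the output file path is arbitrary):

# dump everything the container has written to stdout/stderr into a file on the host
docker logs myapp > /tmp/myapp.log 2>&1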
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
I am new to docker so I am certain I am doing something wrong. I am also not a php developer but that shouldn't matter in this case.
I am using a drupal docker image which has data at the /var/www/html directory.
I am attempting to overwrite this data with a drupal site from a local directory on the host system.
According to the docs, this is the expected behavior:
Mount a host directory as a data volume

In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.

$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py

This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
However I am finding that the local drupal site files do not exist on the container. My complete workflow is as follows:
docker-compose.yml
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
    - "8022:22"
    - "443"
#volumes: THIS IS ALSO NOT WORKING
#- /home/sameh/empower-tap:/var/www/html
$ docker-compose up -d
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/www/html empower_drupal1
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
$ docker run -t -i empower_drupal2 /bin/bash
The empower_drupal2 container does not have the correct files from the /home/sameh/empower-tap directory.
Why this did not work
Here's what you did, with some annotations.
$ docker-compose up -d
Given your docker-compose.yml, with the volumes section commented out, at this point you have a running container, but no volumes mounted.
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
All you've really done here is made a copy of the image you had already, unless your container makes changes to itself on startup.
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/www/html empower_drupal1
Here you have run your new copy and mounted a volume. OK, the files are available in this container now.
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
I'm assuming here that you wanted to commit the contents of the volume into the image. That will not work. The commit documentation is clear about this point:
The commit operation will not include any data contained in volumes mounted inside the container.
$ docker run -t -i empower_drupal2 /bin/bash
So, as you found, when you run the image generated by commit, but without volume mounts, the files are not there.
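You can reproduce this with a quick experiment (the container and image names here are hypothetical):

# run a container with the bind mount, then commit it
docker run -d --name commit-test -v /home/sameh/empower-tap:/var/www/html empower_drupal1
docker commit commit-test commit-test-image

# run the committed image without the mount: you get the image's original
# /var/www/html content, not the files from /home/sameh/empower-tap
docker run --rm commit-test-image ls /var/www/html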
Also, it is not clear in your docker-compose.yml example where the volumes: section was before it was commented out. Currently it seems to be on the left margin, which would not work. It would need to be at the same level as build: and ports: in order to work on your drupal service.
What to do instead
That depends on your goal.
Just copy the files from local
If you literally just want to populate the image with the files from your local system, you can do that in Dockerfile.
COPY local-dir/* /var/www/html
You mentioned that this copy can't work because the directory is not local. Unfortunately that cannot be solved easily with something like a symlink. Your best option is to copy the directory to the local context before building. Docker does not plan to change this behavior.
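For example, a small pre-build step along those lines (local-dir matches the hypothetical name used in the COPY above):

# copy the site into the build context so the Dockerfile COPY can see it
cp -r /home/sameh/empower-tap ./build/drupal-local-codebase/local-dir
docker-compose build drupal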
Override contents for development
A common scenario is you want to use your local directory for development, so that changes are reflected right away instead of doing a rebuild. But when not doing development, you want the files baked into the image.
In that case, start by telling Dockerfile to copy the files into the image, as above. That way an image build will contain them, volume mount or no.
Then, when you are doing development, use volumes: in docker-compose.yml, or the -v flag to docker run, to mount a volume. A volume mount will override whatever is baked into the image, so you will be using your local files. When you're done and the code is ready to go, just do an image build and your final files will be baked into the image for deployment.
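As a sketch of the two modes (using the image name and paths from this question; the port mapping is illustrative):

# development: bind-mount the working copy over the files baked into the image
docker run -d -p 8888:80 -v /home/sameh/empower-tap:/var/www/html empower_drupal1

# deployment: run the image as built, using the files COPYed in at build time
docker run -d -p 8888:80 empower_drupal1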
Use a volume plus a commit
You can also do this in a slightly roundabout way by mounting the volume, copying the contents elsewhere, then committing the result.
# start a container with the volume mounted somewhere
docker run -d -v /home/sameh/empower-tap:/var/www/html_temp [...etc...]
# copy the files elsewhere inside the container
docker exec <container-name> cp -r /var/www/html_temp/. /var/www/html/
# commit the result
docker commit empower_drupal empower_drupal1
Then you should have your mounted volume files in the resulting image.
I would like to mount a directory from a docker container to the local filesystem. The directory is a website root, and I need to be able to edit it on my local machine using any editor.
I know I can run docker run -v local_path:container_path, but doing that only creates an empty directory inside the container.
How can one mount a directory from inside a docker container on a Linux host?
It is a bit weird, but you can use named volumes for that. Unlike host-mounted volumes, named ones won't be emptied, and you can access the directory. See the example:
docker volume create --name data
docker run --rm=true -v data:/etc ubuntu:trusty
docker volume inspect data
[
    {
        "Name": "data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/data/_data",
        "Labels": {},
        "Scope": "local"
    }
]
See the mount point?
mkdir ~/data
sudo -s
cp -r /var/lib/docker/volumes/data/_data/* ~/data
echo "Hello World">~/data/hello.txt
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/fstab #The content is preserved
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/hello.txt #And your changes too
It is not exactly what you were asking for, but depending on your needs it may work.
Regards
If your goal is to provide a ready-to-go LAMP stack, you should use the VOLUME declaration inside the Dockerfile.
VOLUME volume_path_in_container
The problem is that Docker will not mount the files, because they were already present in the path you are creating the volume on. You can go with what @Grif-fin said in his comment, or modify the entrypoint of the container so that it copies the files you want to expose into the volume at run time.
You have to add your data using the COPY or ADD command in the Dockerfile so the base files will be present in the image.
Then create an entrypoint that will copy the files from the COPY path to the volume path.
Then run the container using the -v flag, like -v local_path:volume_path_in_container. This way, you should have the files from inside the container mounted locally. (At least, that is what I had.)
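A minimal entrypoint sketch along those lines (the _dist path and the web root are hypothetical):

#!/bin/sh
# seed the (possibly empty) mounted volume from the files baked into the image by COPY/ADD
cp -r /var/www/html_dist/. /var/www/html/
# then hand control to whatever command the container was asked to run
exec "$@"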
Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.
This avoids having to rebuild every time, and you can provide it from a definitive image.
To make it nicer, you could add user support so that you are the owner of the mounted files (if you are not root).
Hope it was useful to you.
I am running my application in a Docker container as a non-root user. I did this since it is one of the best practices. However, while running the container I mount a host volume to it with -v /some/folder:/some/folder. I am doing this because my application running inside the docker container needs to write files to the mounted host folder. But since I am running my application as a non-root user, it doesn't have permission to write to that folder.
Question
Is it possible to give a non-root user in a docker container access to the mounted host volume?
If not, is my only option to run the process in the docker container as root?
There's no magic solution here: permissions inside docker are managed the same as permissions without docker. You need to run the appropriate chown and chmod commands to change the permissions of the directory.
One solution is to have your container run as root, use an ENTRYPOINT script to make the appropriate permission changes, and then run your CMD as an unprivileged user. For example, put the following in entrypoint.sh:
#!/bin/sh
chown -R appuser:appgroup /path/to/volume
exec runuser -u appuser -- "$@"
This assumes you have the runuser command available. You can accomplish pretty much the same thing using sudo instead.
Use the above script by including an ENTRYPOINT directive in your Dockerfile:
FROM baseimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
CMD ["/usr/bin/myapp"]
This will start the container with:
/bin/sh /entrypoint.sh /usr/bin/myapp
The entrypoint script will make the required permissions changes, then run /usr/bin/myapp as appuser.
This will throw an error if the host environment doesn't have appuser or appgroup, so it is better to use a user ID instead of a user name:
inside your container, run
appuser$ id
This will show:
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
From host env, run:
mkdir -p /some/folder
chown -R 1000:1000 /some/folder
docker run -v /some/folder:/some/folder [your_container]
inside your container, check
ls -lh
to see the user and group name; if it's not root, then it should have worked.
In the specific situation of using an image built from a custom Dockerfile, you can do the following (using example commands for a debian image):
FROM baseimage
...
RUN useradd --create-home appuser
USER appuser
RUN mkdir /home/appuser/my_volume
...
Then mount the volume using
-v /some/folder:/home/appuser/my_volume
Now appuser has write permissions to the volume as it's in their home directory. If the volume has to be mounted outside of their home directory, you can create it and assign appuser write permissions as an extra step within the Dockerfile.
I found it easiest to recursively apply Linux ACL (Access Control List) permissions on the host directory so that the non-root host user can access the volume contents.
sudo setfacl -m u:$(id -u):rwx -R /some/folder
To check who has access to the folder:
getfacl /some/folder
Writing to the volume will create files and directories with the host user ID, which might not be desirable for host -> container transfer. Writing can be disabled by giving just the :rx permission instead of :rwx.
To enable writing, add a mirror ACL policy allowing the container user ID full access to the volume parent path.
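For example, if the process inside the container runs as UID 1000 (a hypothetical value; check with id -u inside the container), the mirror policy could look like:

# give the container's UID read/write access to the existing tree...
sudo setfacl -R -m u:1000:rwx /some/folder
# ...and make newly created files and directories inherit the same access
sudo setfacl -R -d -m u:1000:rwx /some/folder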
I use the following command to run a docker container and map a directory from the host (/root/database) to the container (/tmp/install/database):
# docker run -it --name oracle_install -v /root/database:/tmp/install/database bofm/oracle12c:preinstall bash
But inside the container, I find I can't use ls to list the contents of /tmp/install/database/, even though I am root and have all privileges:
[root@77eb235aceac /]# cd /tmp/install/database/
[root@77eb235aceac database]# ls
ls: cannot open directory .: Permission denied
[root@77eb235aceac database]# id
uid=0(root) gid=0(root) groups=0(root)
[root@77eb235aceac database]# cd ..
[root@77eb235aceac install]# ls -alt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
I checked /root/database on the host, and everything seems OK:
[root@localhost ~]# ls -lt
......
drwxr-xr-x. 7 root root 4096 Jul 7 2014 database
Why does docker container prompt "Permission denied"?
Update:
The root cause is related to SELinux. Actually, I met a similar issue last year.
A "permission denied" error within a container for a shared directory could be due to the fact that this shared directory is stored on a device. By default, containers cannot access any devices. Adding the --privileged option to docker run allows the container to access all devices and perform kernel calls. This is not considered secure.
A cleaner way to share a device is to use the option docker run --device=/dev/sdb (if /dev/sdb is the device you want to share).
From the man page:
--device=[]
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)
--privileged=true|false
Give extended privileges to this container. The default is false.
By default, Docker containers are “unprivileged” (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A “privileged” container is given access to all devices.
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside of a container on the host.
I had a similar issue when sharing an nfs mount point as a volume using docker-compose. I was able to resolve the issue with:
docker-compose up --force-recreate
Even though you found the issue, this may help someone else.
Another reason is a mismatch of the UID/GID. This often shows up as being able to modify a mount as root but not as the container's user.
You can set the UID, so for an ubuntu container running as ubuntu you may need to append :uid=1000 (check with id -u) or set the UID locally depending on your use case.
uid=value and gid=value
Set the owner and group of the files in the filesystem (default: uid=gid=0)
There is a good blog post about it here, with this tmpfs example:
docker run \
--rm \
--read-only \
--tmpfs=/var/run/prosody:uid=100 \
-it learning/tmpfs
http://www.dendeer.com/post/docker-tmpfs/
I got the answer from a comment under: Why does docker container prompt "Permission denied"?
man docker-run gives the proper answer:
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.

To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
For example:
docker run -it --name oracle_install -v /root/database:/tmp/install/database:z ...
I was trying to run a C program using Python's os.system inside the container, but I was getting the same error. My fix was to add this line while creating the image: RUN chmod -R 777 app. It worked for me.