Single file volume mounted as directory in Docker - linux

Docker documentation says that it's possible to mount a single file into a Docker container:
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container. You will have your bash history from the host, and when you exit the container, the host will have the history of the commands typed while in the container.
When I try that however the file mounts as a directory:
tom@u ~/project $ docker run --rm -it -v file.json:/file.json test
total 80K
drwxr-xr-x 9 root root 4.0K Dec 7 12:58 .
drwxr-xr-x 63 root root 4.0K Dec 7 12:58 ..
drwxr-xr-x 2 root root 4.0K Dec 4 16:10 file.json
My Dockerfile looks like this:
FROM ubuntu:14.04
MAINTAINER Tom
CMD ["ls", "-lah", "/test"]
Docker version is 1.9.1, build a34a1d5.
Is this a documentation issue, a misunderstanding on my side, or is there something else going on?

Maybe that's clear in the answers above... but it took me some time to figure it out in my case.
The underlying reason the file shared with -v appears as a directory instead of a file is that Docker could not find the file on the host. Docker then creates a new directory in the container, named after the non-existent host file, because it assumes the user just wants to share a volume/directory that will be created in the future.
So in the problem reported above: if you used a relative path in the -v option, Docker does not understand relative paths, which means the file was not found on the host, so Docker created a directory. The answer above suggesting $(pwd) is the correct solution when the problem is due to a relative path.
But for those reading this page who are not using a relative path and are having the same problem: try to understand why the file is missing on the host.
It could just be a stupid typo...
It could be that you're running the "docker run" command from a client which spawns the container on a different host, and the file being shared does not exist on that other host. The file shared with -v must exist on the host where the Docker agent will spawn the container, not necessarily on the client where the "docker run -v ..." command is executed (although they will be the same in many cases).
There are other possible explanations above for Mac and Windows... that could be it too.
So a file missing from the host is the problem; troubleshoot that in your setup. Using $(pwd) could be the solution, but not always.
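To catch this before it bites, a small guard can verify that the host path is absolute and exists before it is handed to -v. This is an illustrative sketch of my own (the require_host_file name and file.json are hypothetical), not anything built into Docker:

```shell
# Refuse to pass a host path to "docker run -v" unless it is absolute and
# actually exists, because Docker silently creates a missing source as a
# directory instead of failing.
require_host_file() {
  path=$1
  case $path in
    /*) : ;;  # absolute path: ok
    *) echo "error: '$path' is not absolute; try \$(pwd)/$path" >&2; return 1 ;;
  esac
  if [ ! -e "$path" ]; then
    echo "error: '$path' does not exist on the Docker host" >&2
    return 1
  fi
}

# Hypothetical usage:
# require_host_file "$(pwd)/file.json" &&
#   docker run --rm -it -v "$(pwd)/file.json:/file.json" test
```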

test is the name of the image you built with 'docker build -t test', not a /test folder.
Try a Dockerfile with:
CMD ["ls", "-lah", "/"]
or
CMD ["cat", "/file.json"]
And:
docker run --rm -it -v $(pwd)/file.json:/file.json test
Note the use of $(pwd) in order to mount the file with its full absolute path (relative paths are not supported).
By using $(pwd), you get an absolute path which does exist, with the correct case, as opposed to a file name or path which might not exist.
A non-existent host path would be mounted as a folder in the container.

When running docker inside docker (by mounting /var/run/docker.sock, for example), you need to be aware that the file paths used in mounts are always the ones on your host.
So if on your host you do the following mount:
-v /tmp/foobar.txt:/my/path/foobar.txt
you should not do the following mount inside docker:
-v /my/path/foobar.txt:/my/other/path.txt
but instead use the host file path, e.g.:
-v /tmp/foobar.txt:/my/other/path.txt
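That rule can be captured in a small helper that rewrites an inner-container path to the corresponding host path, given the outer mount HOST_DIR -> CONTAINER_DIR. The host_path name and the paths are my own illustrative choices:

```shell
# Translate a path as seen inside the outer container into the host path
# that a sibling "docker run -v" actually needs, given the outer container's
# mount of host_dir onto container_dir.
host_path() {
  host_dir=$1 container_dir=$2 inner_path=$3
  case $inner_path in
    "$container_dir"/*)
      rest=${inner_path#"$container_dir"}
      printf '%s\n' "$host_dir$rest" ;;
    "$container_dir")
      printf '%s\n' "$host_dir" ;;
    *)
      echo "error: $inner_path is not under $container_dir" >&2
      return 1 ;;
  esac
}

# Outer container started with: -v /tmp:/my/path
# host_path /tmp /my/path /my/path/foobar.txt   prints /tmp/foobar.txt
```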

I spent a while fighting with and diagnosing this issue running Docker on Windows. It can also affect people running on Mac OS X, so I'm adding an answer here for people possibly having the issue in those environments, since my search brought me to this place, along with an explanation of what appears to be happening in Docker.
On Windows and Mac OS X, Docker actually runs in a boot2docker VM, and only the user's directory is shared by default. On Windows, this user directory is shared as /c/Users/; however, in the MinGW shell shipped with Docker Machine, the drive can also be accessed as /C or /c. This can drive you nuts if you forget that the docker commands actually run against the boot2docker VM: your file paths have to exist on that VM and be specified the way they exist there. Instead of giving a warning or error that the directory/file does not exist, Docker silently creates the specified source as a directory in the boot2docker VM, so there is no ready output to indicate that you are doing anything incorrectly.
So, as in the answer above, if your file is mounting as a directory, check that you are providing an absolute path. On Windows and Mac OS X, check that the absolute path you are mounting exists in your boot2docker VM.

A case where Docker might not find the file, even though you're sure it exists
As edi9999 pointed out, if you tell the Docker daemon to mount a file, it won't look in your current container's filesystem; it will look in the filesystem where the daemon is running.
You might have this problem if your docker daemon is running elsewhere for some reason.
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
/ # echo "bar" > /foo
/ # docker run --rm -v /foo:/foo ubuntu bash -c 'cat foo'
cat: foo: Is a directory
Docker can't find the /foo file on its host, so it (helpfully?) creates a directory there so at least you've mounted something.
A Workaround
You can work around this by mounting a host directory into the outer container, and then using that directory for the volume you want to appear in the inner container:
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /dev/shm:/dev/shm docker
/ # echo "bar" > /dev/shm/foo
/ # docker run --rm -v /dev/shm/foo:/dev/shm/foo ubuntu bash -c 'cat /dev/shm/foo'
bar
This makes the path /dev/shm/foo refer to the same file in either context, so you can reference the file from the outer container, and the daemon will find it on the host, which means it will show up as itself in the inner container, rather than as a directory.

For "Docker for Mac/Windows" users, make sure the volume you're trying to mount from your host is part of your "File sharing" preferences:

There is a simple solution for those who use a VirtualBox machine.
By default, the C:/Users folder is added. If your project is in C:/projects, add this folder to make it available in VirtualBox (with automount).

I had the same problem discussed here with Docker on my MacBook, and none of the suggestions worked for me. It turns out the issue was that I did not have permissions for the file I was trying to mount (it was owned by root). I copied it to my user path and changed ownership to myself, and then the file mounted as a file and not a directory. I think if you don't have permissions, Docker interprets this as the file simply not existing and proceeds to mount it as a new directory.

In my case, on macOS, I had something like this:
version: "3.7"
services:
  if_django:
    build: ./web/
    image: if_django
    restart: always
    container_name: cf_django
    command: >
      sh -c "python manage.py collectstatic --noinput
      && uwsgi --ini core.uwsgi.ini"
    volumes:
      - ./logs/core.log:/code/core.log
      - uwsgi-data:/tmp/uwsgi/
      - web-static:/code/static/
      - web-static:/var/www/core/assets/
  if_nginx:
    build: ./nginx/
    image: if_nginx
    restart: always
    container_name: cf_nginx
    volumes:
      - uwsgi-data:/tmp/uwsgi/
      - web-static:/var/www/core/assets/:ro
      - ./logs:/var/log/nginx/
    ports:
      - 8800:80
    depends_on:
      - if_django
volumes:
  uwsgi-data:
  web-static:
And the ./logs/core.log file was created as a directory, which made the container cf_django fail to start up.
My solution?
mkdir -p logs && touch logs/core.log && docker-compose up --build -d
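The same fix generalizes to any bind mount whose source is a single file: pre-create the host paths before docker-compose up. A sketch (the list of files is illustrative):

```shell
# Pre-create host paths that docker-compose bind-mounts as files;
# otherwise the daemon creates them as directories and the service
# (here cf_django) fails to start.
for f in logs/core.log; do
  mkdir -p "$(dirname "$f")"   # make sure the parent directory exists
  [ -e "$f" ] || : > "$f"      # create an empty file only if it is missing
done
```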

I will share my experience: I tend to change my passwords a lot, and I had to "reset credentials" in Docker Desktop for Windows for files to be mounted correctly (and of course, you should make sure that the files are in the shared paths).

Related

Docker: git operations not permitted in dir which is mounted from Win10's to container running ubuntu

My company's PC runs Win10 and I can't bring my own laptop, but I want to develop on Linux. So I'm preparing to install Docker on Windows and run a Linux container in which I do my development.
--- background ---
I've installed Docker Desktop for Windows (19.03.8) on Win10 and pulled an Ubuntu image.
I start the Ubuntu container with -v to mount my Win10 host_dir to the container's slave_dir.
The host_dir was already a git repo before mounting, with a .git directory inside it.
Through ssh as the root user, I edit a file in slave_dir in the container, and when I try to commit the changes, the following error appears:
root@5f8d7d02ee70:~/slave_dir# git status
fatal: failed to read object 36fa53e7ecb9d1daa454fc82f7bd7310afa335b7: Operation not permitted
I guess something is wrong with the git permissions between Win10 and my Linux container.
Linux-container's slave_dir:
Win10's host_dir:
And I found a similar case, in which the blogger said you should run docker with --user, and the --user parameter should be the same as the user you log in with on the host.
So I tried as follows:
docker run -it --name test --user Leo -p 127.0.0.1:5001 -v host_dir:slave_dir image_name /bin/bash
Unfortunately, the slave_dir's uid and gid are still root.
With Cygwin on Win10, I used id to find my login user's uid and gid, and retried running docker with the uid/gid directly.
docker run -it --name test --user 4344296:1049089 -p 127.0.0.1:5001 -v host_dir:slave_dir image_name /bin/bash
OMG, it still doesn't work! Still root!
I'm wondering whether my operation is wrong, or whether Docker Desktop for Windows plays some tricks with permissions when mounting.
Thanks all!
It looks like a problem with Docker 2.2.0.4. A fix for this problem can be found at this link (it worked for me).
TL;DR: remove the read-only attribute from the .git folder in Windows.

Docker -- mounting a volume not behaving like regular mount

I am new to docker so I am certain I am doing something wrong. I am also not a php developer but that shouldn't matter in this case.
I am using a drupal docker image which has data at the /var/www/html directory.
I am attempting to overwrite this data with a drupal site from a local directory on the host system.
According to the docs this is the expected behavior:
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
However I am finding that the local drupal site files do not exist on the container. My complete workflow is as follows:
docker-compose.yml
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
    - "8022:22"
    - "443"
#volumes: THIS IS ALSO NOT WORKING
#- /home/sameh/empower-tap:/var/www/html
$ docker-compose up -d
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
$ docker run -t -i empower_drupal2 /bin/bash
The empower_drupal2 container does not have the correct files from the /home/sameh/empower-tap directory.
Why this did not work
Here's what you did, with some annotations.
$ docker-compose up -d
Given your docker-compose.yml, with the volumes section commented out, at this point you have running container, but no volumes mounted.
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
All you've really done here is make a copy of the image you already had, unless your container makes changes to itself on startup.
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
Here you have run your new copy, mounted a volume. Ok, the files are available in this container now.
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
I'm assuming here that you wanted to commit the contents of the volume into the image. That will not work. The commit documentation is clear about this point:
The commit operation will not include any data contained in volumes mounted inside the container.
$ docker run -t -i empower_drupal2 /bin/bash
So, as you found, when you run the image generated by commit, but without volume mounts, the files are not there.
Also, it is not clear in your docker-compose.yml example where the volumes: section was before it was commented out. Currently it seems to be on the left margin, which would not work. It would need to be at the same level as build: and ports: in order to work on your drupal service.
What to do instead
That depends on your goal.
Just copy the files from local
If you literally just want to populate the image with the files from your local system, you can do that in Dockerfile.
COPY local-dir/* /var/www/html
You mentioned that this copy can't work because the directory is not local. Unfortunately that cannot be solved easily with something like a symlink. Your best option is to copy the directory to the local context before building. Docker does not plan to change this behavior.
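That staging step can be sketched with a small helper. The stage_context name and the paths are my own illustrative choices, not part of Docker:

```shell
# COPY can only reach files inside the build context, so copy the external
# directory into the context before building, and clean it up afterwards.
stage_context() {
  src=$1 dest=$2
  rm -rf "$dest"       # discard any stale staged copy
  cp -r "$src" "$dest" # bring the external directory into the context
}

# Hypothetical usage:
# stage_context /home/sameh/empower-tap ./local-dir
# docker build -t empower_drupal .
# rm -rf ./local-dir
```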
Override contents for development
A common scenario is you want to use your local directory for development, so that changes are reflected right away instead of doing a rebuild. But when not doing development, you want the files baked into the image.
In that case, start by telling Dockerfile to copy the files into the image, as above. That way an image build will contain them, volume mount or no.
Then, when you are doing development, use volumes: in docker-compose.yml, or the -v flag to docker run, to mount a volume. A volume mount will override whatever is baked into the image, so you will be using your local files. When you're done and the code is ready to go, just do an image build and your final files will be baked into the image for deployment.
Use a volume plus a commit
You can also do this in a slightly roundabout way by mounting the volume, copying the contents elsewhere, then committing the result.
# start a container with the volume mounted somewhere
docker run -d -v /home/sameh/empower-tap:/var/www/html_temp [...etc...]
# copy the files elsewhere inside the container
docker exec <container-name> cp -r /var/www/html_temp /var/www/html
# commit the result
docker commit empower_drupal empower_drupal1
Then you should have your mounted volume files in the resulting image.

How to start a chrooted directory as a docker container?

Suppose I have a directory, for example /demoenv. I would like to start a binary in it as a docker container.
Essentially, it would work like a chroot, but with the numerous extra features (and numerous disadvantages) that a docker container provides.
In this case, it is absolutely not a problem if it can't take part in the very useful docker image/container committing mechanism (I have an alternate solution for that).
Can I somehow do it?
My first trouble is that in this case, I don't really have an image to start.
My second problem is that the -v volume mount option (parameter of docker run) forbids mounting the root partition with the following message:
docker: Error response from daemon: Invalid bind mount spec "/demoenv:/": Invalid specification: destination can't be '/' in '/demoenv:/'.
Finally I found a workaround.
Unfortunately, simply mounting "/" (either with the VOLUME command in the Dockerfile, or by giving -v to docker run) doesn't work: it can't mount the root directory as a volume.
Furthermore, VOLUME in the Dockerfile doesn't seem to work at all.
The best (or least bad) solution is mounting the sub-directories as separate volumes, like so:
docker run -d -h demobox --name demobox \
    -v /demobox/bin:/bin \
    -v /demobox/boot:/boot \
    -v /demobox/etc:/etc \
    -v /demobox/home:/home \
    -v /demobox/lib:/lib \
    -v /demobox/opt:/opt \
    -v /demobox/root:/root \
    -v /demobox/sbin:/sbin \
    -v /demobox/srv:/srv \
    -v /demobox/usr:/usr \
    -v /demobox/var:/var \
    demobox
Unfortunately, it also needs to have a fake image to run (it is "fake" because all of its relevant /* directories will be over-mounted by the docker daemon). You can use anything for that (I used the default image of the distribution).
Additional info: as the entrypoint, we can also give /sbin/init to the container. In my tries, systemd wasn't able to run in it, but upstart could (apt-get install upstart). By giving /sbin/init as the ENTRYPOINT in the Dockerfile, then calling telinit 3 after starting the container, we can essentially run a docker container as a virtual server.
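The long list of -v flags above can also be generated with a loop instead of typed by hand. A sketch (the volume_flags helper is my own name), assuming the same /demobox layout:

```shell
# Emit a "-v src:dst" flag for each top-level directory that actually
# exists under the chroot tree.
volume_flags() {
  base=$1 flags=""
  for d in bin boot etc home lib opt root sbin srv usr var; do
    [ -d "$base/$d" ] && flags="$flags -v $base/$d:/$d"
  done
  printf '%s\n' "$flags"
}

# Hypothetical usage:
# docker run -d -h demobox --name demobox $(volume_flags /demobox) demobox
```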

How to mount a directory inside a docker container on Linux host [duplicate]

This question already has answers here:
Mount directory in Container and share with Host
(3 answers)
Closed 4 years ago.
I would like to mount a directory from a docker container to the local filesystem. The directory is a website root, and I need to be able to edit it on my local machine using any editor.
I know I can run docker run -v local_path:container_path but doing that it is only creating an empty directory inside the container.
How can one mount a directory inside a docker container on linux host?
It is a bit weird, but you can use named volumes for that. Unlike host-mounted volumes, named ones won't be emptied, and you can access the dir. See the example:
docker volume create --name data
docker run --rm=true -v data:/etc ubuntu:trusty
docker volume inspect data
[
    {
        "Name": "data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/data/_data",
        "Labels": {},
        "Scope": "local"
    }
]
See the mount point?
mkdir ~/data
sudo -s
cp -r /var/lib/docker/volumes/data/_data/* ~/data
echo "Hello World">~/data/hello.txt
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/fstab #The content is preserved
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/hello.txt #And your changes too
It is not exactly what you were asking for, but depending on your needs it may work.
Regards
If your goal is to provide a ready-to-go LAMP stack, you should use the VOLUME declaration inside the Dockerfile.
VOLUME volume_path_in_container
The problem is that docker will not mount the files, because they were already present in the path you are creating the volume on. You can go with what @Grif-fin said in his comment, or modify the entrypoint of the container so it copies the files you want to expose into the volume at run time.
You have to insert your data using the COPY or ADD command in the Dockerfile, so the base files are present in the container.
Then create an entrypoint that copies the files from the COPY path to the volume path.
Then run the container using the -v flag, like -v local_path:volume_path_in_container. This way, you should have the files inside the container mounted on the local path. (At least, that's what I had.)
Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.
This avoids having to rebuild every time, and you can provide it from a definitive image.
It would be even nicer to add user support, so you are the owner of the mounted files (if you are not root).
Hope it was useful to you.
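A minimal entrypoint along those lines might look like this. It's a sketch under my own assumptions: SEED_DIR (where the Dockerfile COPY'd the files) and DATA_DIR (the bind-mounted path) are hypothetical names, not a Docker convention:

```shell
#!/bin/sh
# entrypoint sketch: seed an initially empty bind-mounted directory with
# the files baked into the image at build time, then exec the real command.

seed_volume() {
  seed=$1 data=$2
  [ -d "$seed" ] || return 0   # nothing baked in; nothing to seed
  # Copy only when the mount is empty, so host-side edits are never clobbered.
  if [ -d "$data" ] && [ -z "$(ls -A "$data" 2>/dev/null)" ]; then
    cp -r "$seed"/. "$data"/
  fi
}

seed_volume "${SEED_DIR:-/app/seed}" "${DATA_DIR:-/var/www/html}"
exec "$@"
```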

Exploring Docker container's file system

I've noticed with docker that I need to understand what's happening inside a container or what files exist in there. One example is downloading images from the docker index - you don't have a clue what the image contains so it's impossible to start the application.
What would be ideal is to be able to ssh into them, or equivalent. Is there a tool to do this, or is my conceptualisation of docker wrong in thinking I should be able to do this?
Here are a couple different methods...
A) Use docker exec (easiest)
Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. This command can run a new process in an already running container (the container must have a PID 1 process running already). You can run /bin/bash to explore the container's state:
docker exec -t -i mycontainer /bin/bash
see Docker command line documentation
B) Use Snapshotting
You can evaluate container filesystem this way:
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash
This way, you can evaluate the filesystem of the running container at a precise moment in time. The container is still running, and no future changes are included.
You can later delete the snapshot using (the filesystem of the running container is not affected!):
docker rmi mysnapshot
C) Use ssh
If you need continuous access, you can install sshd to your container and run the sshd daemon:
docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
# you need to find out which port to connect:
docker ps
This way, you can run your app using ssh (connect and execute what you want).
D) Use nsenter
Use nsenter, see Why you don't need to run SSHd in your Docker containers
The short version is: with nsenter, you can get a shell into an
existing container, even if that container doesn’t run SSH or any kind
of special-purpose daemon
UPDATE: EXPLORING!
This command should let you explore a running docker container:
docker exec -it name-of-container bash
The equivalent for this in docker-compose would be:
docker-compose exec web bash
(web is the name-of-service in this case and it has tty by default.)
Once you are inside do:
ls -lsa
or any other bash command like:
cd ..
This command should let you explore a docker image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do:
ls -lsa
or any other bash command like:
cd ..
The -it stands for interactive... and tty.
This command should let you inspect a running docker container or image:
docker inspect name-of-container-or-image
You might want to do this and find out if there is any bash or sh in there. Look for entrypoint or cmd in the json return.
NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls present, you could first add them in a layer, if you have access to the Dockerfile:
example for alpine:
RUN apk add --no-cache bash
Otherwise, if you don't have access to the Dockerfile, then just copy the files out of a newly created container and look through them:
docker create <image> # returns a container ID; the container is never started.
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
see docker exec documentation
see docker-compose exec documentation
see docker inspect documentation
see docker create documentation
In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.
You may archive your container's filesystem into tar file:
docker export adoring_kowalevski > contents.tar
Or list the files:
docker export adoring_kowalevski | tar t
Do note that, depending on the image, it might take some time and disk space.
Before container creation:
If you want to explore the structure of the image that is mounted inside the container, you can do
sudo docker image save image_name > image.tar
tar -xvf image.tar
This would give you the visibility of all the layers of an image and its configuration which is present in json files.
After container creation:
There are already a lot of answers above for this. My preferred way would be:
docker exec -t -i container /bin/bash
The most upvoted answer works for me when the container is actually started, but when it isn't possible to run it and you, for example, want to copy files out of the container, this has saved me before:
docker cp <container-name>:<path/inside/container> <path/on/host/>
Thanks to docker cp (link) you can copy directly from the container as if it were any other part of your filesystem.
For example, recovering all files inside a container:
mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/
Note that you don't need to specify that you want to copy recursively.
The file system of the container is in the data folder of docker, normally in /var/lib/docker. In order to start and inspect a running container's file system, do the following:
hash=$(docker run busybox)
cd /var/lib/docker/aufs/mnt/$hash
And now the current working directory is the root of the container.
you can use dive to view the image content interactively with TUI
https://github.com/wagoodman/dive
Try using
docker exec -it <container-name> /bin/bash
There is a possibility that bash is not installed. In that case you can use
docker exec -it <container-name> sh
On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:
/var/lib/docker/devicemapper/mnt/<container id>/rootfs/
Full Docker version information:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
In my case, no shell was supported in the container except sh. So this worked like a charm:
docker exec -it <container-name> sh
The most voted answer is good, except if your container isn't an actual Linux system.
Many containers (especially Go-based ones) don't have any standard binary (no /bin/bash or /bin/sh). In that case, you will need to access the actual container files directly:
Works like a charm:
name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId
Note: You need to run it as root.
I use another dirty trick that is aufs/devicemapper agnostic.
I look at the command that the container is running, e.g. with docker ps,
and if it's an apache or java process, I just do the following:
sudo -s
cd /proc/$(pgrep java)/root/
and voilà, you're inside the container.
Basically, as root you can cd into /proc/<PID>/root/ as long as that process is run by the container. Beware: symlinks will not make sense while using that mode.
Only for LINUX
The simplest way I use is the proc dir; the container must be running in order to inspect the docker container files.
Find out the process id (PID) of the container and store it into some variable
PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
Make sure the container process is running, and use the variable name to get into the container folder
cd /proc/$PID/root
If you want to get into the dir without finding out the PID number, just use this long command:
cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root
Tips:
After you get inside the container, everything you do will affect the actual process of the container, such as stopping a service or changing a port number.
Hope it helps
Note:
This method only works if the container is still running; otherwise the directory no longer exists once the container has stopped or been removed.
None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.
For a real manual inspection, find out the layer IDs first:
docker inspect my-container | jq '.[0].GraphDriver.Data'
In the output, you should see something like
"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"
Navigate into this folder (as root) to find the current visible state of the container filesystem.
This will launch a bash session for the image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
On newer versions of Docker you can run docker exec [container_name], which runs a shell inside your container.
So to get a list of all the files in a container, just run docker exec [container_name] ls
I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.
What worked for me was to simply copy the contents of the entire container into a new folder like this:
docker cp container_name:/app/ new_dummy_folder
I was then able to explore the contents of this folder as one would do with a normal folder.
For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):
chroot /var/lib/docker/containers/2465790aa2c4*/root/
Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.
For docker aufs driver:
This script will find the container root dir (tested on docker 1.7.1 and 1.10.3):
if [ -z "$1" ] ; then
    echo 'docker-find-root $container_id_or_name '
    exit 1
fi
CID=$(docker inspect --format {{.Id}} $1)
if [ -n "$CID" ] ; then
    if [ -f /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id ] ; then
        F1=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id)
        d1=/var/lib/docker/aufs/mnt/$F1
    fi
    if [ ! -d "$d1" ] ; then
        d1=/var/lib/docker/aufs/diff/$CID
    fi
    echo $d1
fi
This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.
List running docker containers:
docker ps
=> CONTAINER ID "4c721f1985bd"
Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):
docker inspect -f {{.Mounts}} 4c721f1985bd
=> [{ /tmp/container-garren /tmp true rprivate}]
This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.
Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
If you are using Docker v19.03, follow the steps below.
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem
docker run -t -i mysnapshot /bin/sh
For an already running container, you can do:
dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])
cd /var/lib/docker/btrfs/subvolumes/$dockerId
You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.
Edit: Following v1.3, see Jiri's answer - it is better.
Another trick is to use the atomic tool to do something like:
mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt
The Docker image will be mounted to /path/to/mnt for you to inspect it.
My preferred way to understand what is going on inside a container is:
Expose a port (-p 8000):
docker run -it -p 8000:8000 image
Then start a server inside it:
python -m SimpleHTTPServer
If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and read-write layer:
# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
Edit 2018-03-28 :
docker-layer has been replaced by docker-backup
The docker exec command to run a command in a running container can help in multiple cases.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
For example:
1) Access the running container's filesystem with bash:
docker exec -it containerId bash
2) Access the running container's filesystem as root, to have the required rights:
docker exec -it -u root containerId bash
This is particularly useful for doing some processing as root in a container.
3) Access the running container's filesystem with a specific working directory:
docker exec -it -w /var/lib containerId bash
Oftentimes I only need to explore the docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.
While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:
CMD [ "ls", "-R" ]
I've found the easiest all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container:
mc editing files in docker
inside the container, install mc and ssh: docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages
in the same exec-bash console, run mc
press ESC then 9 then ENTER to open the menu and select "Shell link..."
using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running docker) by its IP address
do your job in the graphical UI
This method overcomes all issues with permissions, snap isolation etc., allows copying directly to any machine, and is the most pleasant to use for me.
I had an unknown container that was doing some production workload, and I did not want to run any commands in it.
So, I used docker diff.
This lists all files that the container has changed, and is therefore well suited to exploring the container's file system.
To get only a folder you can just use grep:
docker diff <container> | grep /var/log
It will not show files from the docker image. Depending on your use case this can help or not.
Late to the party, but in 2022 we have VS Code
