Docker - no such file or directory - node.js

I'm receiving an error from Docker when I build my Dockerfile. When I run docker build . it fails with /var/lib/docker/aufs/layers/xxxx: no such file or directory.
I have tried numerous ways to remove containers and images, so I'm pretty much stuck on this one.
Any ideas?
The Dockerfile is:
FROM node:6
RUN git clone https://github.com/preboot/angular2-webpack.git
WORKDIR angular2-webpack
RUN sed -i.bak 's/--port 8080/--host 0.0.0.0 --port 8080/' package.json
RUN npm i
CMD [ "npm", "run", "start" ]
The complete console output is:
Sending build context to Docker daemon
9.728 kB
Step 1 : FROM node:6
6: Pulling from library/node
6a5a5368e0c2: Already exists
7b9457ec39de: Already exists
ff18e19c2db4: Already exists
6a3d69edbe90: Already exists
0ce4b037e17f: Already exists
82252a100d5a: Already exists
Digest:
sha256:db245bde5445eb122d8dc090ba98539a9ef7f56c0ea981ade643695af0d8eaf0
Status: Downloaded newer image for node:6
 ---> 9873603dc506
Step 2 : RUN git clone https://github.com/preboot/angular2-webpack.git
open /var/lib/docker/aufs/layers/9319fd93cb6d6718243ff2e65ce5d2aa6122a1bb9211aa9f8e88d85c298727e5: no such file or directory
Edit:
The issue was resolved thanks to BMitch's recommendation:
rm -rf /var/lib/docker/*
Uninstall Docker completely
Reinstall Docker

With that sort of corruption, I'd give a full docker wipe a try, rm -rf /var/lib/docker/*. Before doing that, backup any data (volumes), then shutdown docker, and you'll need to pull or rebuild all your images again. If there are still problems with aufs, try changing the filesystem driver, e.g. changing to dockerd -s overlay2 in your service startup.
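A minimal sketch of switching the storage driver, assuming systemd and the standard /etc/docker/daemon.json location (if your service unit already passes -s on the dockerd command line, change it there instead):
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak                          # keep a copy until the new setup is verified
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
docker info | grep -i storage                                        # should now report overlay2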
It doesn't hurt to check for common issues first, like running out of disk space or an outdated Docker version.
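To rule out the disk-space case quickly, something like this is usually enough (docker system df requires Docker 1.13 or newer):
df -h /var/lib/docker     # free space on the filesystem backing Docker
docker system df          # how much of it images, containers and volumes consume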

Try building the image again on a clean machine or using the --no-cache flag; this seems like a caching issue.
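For example (the my-app tag is just a placeholder):
docker build --no-cache -t my-app .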
Also, in my company we clone the code onto the machine building the image and then copy the code into the container. In my opinion it's a better solution, but I think it's a matter of taste.

The data files used by Docker are corrupted. You can execute the following commands:
1- If they exist, delete the containers and images (a bulk shortcut is sketched after these steps):
docker rm <CONTAINER ID>
docker rmi <IMAGE ID>
2- Stop the Docker service (Ubuntu)
service docker stop
3- Start the Docker service (Ubuntu)
service docker start
4- Check Docker service status (Ubuntu)
service docker status
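If you want to clear everything in one go, a common bulk shortcut (destructive: it force-removes every container and image on the host) looks like this:
docker rm -f $(docker ps -aq)        # remove all containers, running or stopped
docker rmi -f $(docker images -q)    # remove all images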

docker system prune -af
worked for me

Related

unauthorized: authentication required error while creating docker nodejs image

I am new to Docker and trying to build a Docker image for my Node.js project.
This is my Dockerfile:
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8443
CMD ["node", "index.js"]
Command used to build the image:
docker build -t test-project .
It is giving unauthorized: authentication required after a few minutes:
Step 1/7 : FROM node:10
10: Pulling from library/node
76b8ef87096f: Extracting [====================> ] 18.81MB/45.38MB
2e2bafe8a0f4: Download complete
b53ce1fd2746: Download complete
84a8c1bd5887: Downloading [==============================> ] 30.38MB/49.79MB
7a803dc0b40f: Downloading [========> ] 34.34MB/214.3MB
b800e94e7303: Downloading
0da9fbf60d48: Waiting
04dccde934cf: Waiting
73269890f6fd: Waiting
unauthorized: authentication required
I authenticated before starting the build using
docker login -u <username>
Please help with this authentication error while building the Docker image.
A couple of resolutions:
Try a docker logout. It's possible your authentication with Docker Hub has expired, and your token is being sent along with the pull request, even though these images are public and the token is not needed.
Check for updates and update Docker.
Log out and log in again (if you are working on Windows, verify with docker login).
Check the host clock; it should be properly set to the current time.
If Docker is behind a proxy, try commenting out the proxy settings and do sudo systemctl daemon-reload and sudo systemctl restart docker.
After you do all of the above steps, try docker pull hello-world and check whether it works (a minimal sequence is sketched below).
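A minimal re-authentication sketch, assuming the registry is Docker Hub and <username> is yours:
docker logout
docker login -u <username>        # re-enter the password / access token when prompted
docker pull hello-world           # verify pulls work before retrying the build
docker build -t test-project .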
On Windows you need to make sure that:
You've shared your drive.
You've added COMPOSE_CONVERT_WINDOWS_PATHS with a value of 1 to your system variables.
Try logging out and then back in from your terminal (when doing it on Docker for Windows, sometimes the issue persists).
Check your network connection. Can be a VPN issue.

Docker -- mounting a volume not behaving like regular mount

I am new to docker so I am certain I am doing something wrong. I am also not a php developer but that shouldn't matter in this case.
I am using a drupal docker image which has data at the /var/www/html directory.
I am attempting to overwrite this data with a drupal site from a local directory on the host system.
According to the docs this is the expected behavior
Mount a host directory as a data volume
In addition to creating a
volume using the -v flag you can also mount a directory from your
Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp
python app.py
This command mounts the host directory, /src/webapp,
into the container at /webapp. If the path /webapp already exists
inside the container’s image, the /src/webapp mount overlays but does
not remove the pre-existing content. Once the mount is removed, the
content is accessible again. This is consistent with the expected
behavior of the mount command.
However I am finding that the local drupal site files do not exist on the container. My complete workflow is as follows:
docker-compose.yml
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
    - "8022:22"
    - "443"
#volumes: THIS IS ALSO NOT WORKING
#- /home/sameh/empower-tap:/var/www/html
$ docker-compose up -d
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
$ docker run -t -i empower_drupal2 /bin/bash
The empower_drupal2 container does not have the correct files from the /home/sameh/empower-tap directory.
Why this did not work
Here's what you did, with some annotations.
$ docker-compose up -d
Given your docker-compose.yml, with the volumes section commented out, at this point you have running container, but no volumes mounted.
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
All you've really done here is made a copy of the image you had already, unless your container makes changes to itself on startup.
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
Here you have run your new copy, mounted a volume. Ok, the files are available in this container now.
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
I'm assuming here that you wanted to commit the contents of the volume into the image. That will not work. The commit documentation is clear about this point:
The commit operation will not include any data contained in volumes mounted inside the container.
$ docker run -t -i empower_drupal2 /bin/bash
So, as you found, when you run the image generated by commit, but without volume mounts, the files are not there.
Also, it is not clear in your docker-compose.yml example where the volumes: section was before it was commented out. Currently it seems to be on the left margin, which would not work. It would need to be at the same level as build: and ports: in order to work on your drupal service.
What to do instead
That depends on your goal.
Just copy the files from local
If you literally just want to populate the image with the files from your local system, you can do that in Dockerfile.
COPY local-dir/* /var/www/html
You mentioned that this copy can't work because the directory is not local. Unfortunately that cannot be solved easily with something like a symlink. Your best option is to copy the directory to the local context before building. Docker does not plan to change this behavior.
Override contents for development
A common scenario is you want to use your local directory for development, so that changes are reflected right away instead of doing a rebuild. But when not doing development, you want the files baked into the image.
In that case, start by telling Dockerfile to copy the files into the image, as above. That way an image build will contain them, volume mount or no.
Then, when you are doing development, use volumes: in docker-compose.yml, or the -v flag to docker run, to mount a volume. A volume mount will override whatever is baked into the image, so you will be using your local files. When you're done and the code is ready to go, just do an image build and your final files will be baked into the image for deployment.
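A sketch of that workflow (the ./site directory and image details are illustrative, not taken from the question):
# Dockerfile: bake the site into the image so builds/deploys are self-contained
FROM drupal
COPY ./site/ /var/www/html/
# docker-compose.yml used during development: the bind mount overrides the baked-in copy
drupal:
  build: .
  ports:
    - "8888:80"
  volumes:
    - ./site:/var/www/html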
Use a volume plus a commit
You can also do this in a slightly roundabout way by mounting the volume, copying the contents elsewhere, then committing the result.
# start a container with the volume mounted somewhere
docker run -d -v /home/sameh/empower-tap:/var/www/html_temp [...etc...]
# copy the files elsewhere inside the container
docker exec <container-name> cp -r /var/www/html_temp /var/www/html
# commit the result
docker commit empower_drupal empower_drupal1
Then you should have your mounted volume files in the resulting image.

Single file volume mounted as directory in Docker

Docker documentation says that it's possible to mount a single file into a Docker container:
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
When I try that however the file mounts as a directory:
tom#u ~/project $ docker run --rm -it -v file.json:/file.json test
total 80K
drwxr-xr-x 9 root root 4.0K Dec 7 12:58 .
drwxr-xr-x 63 root root 4.0K Dec 7 12:58 ..
drwxr-xr-x 2 root root 4.0K Dec 4 16:10 file.json
My Dockerfile looks like this:
FROM ubuntu:14.04
MAINTAINER Tom
CMD ["ls", "-lah", "/test"]
Docker version is 1.9.1, build a34a1d5.
Is this a documentation issue, a misunderstanding on my side, or is there something else going on?
Maybe that's clear in the answers above... but it took me some time to figure it out in my case.
The underlying reason the file being shared with -v appears as a directory instead of a file is that Docker could not find the file on the host. So Docker creates a new directory in the container, named after the non-existing file on the host, because Docker assumes the user just wants to share a volume/directory that will be created in the future.
So in the problem reported above: if you used a relative path in the -v option, Docker does not understand relative paths, which means the file was not found on the host, so Docker created a directory. The answer above that suggests using $(pwd) is the correct solution when the problem is due to a relative path.
But for those reading this page who are not using a relative path and are still having the same problem: try to understand why the file is missing on the host.
It could just be a stupid typo...
It could be that you're running the "docker run" command from a client which spawns the docker container on a different host and the file being shared does not exist on that different host. The file being shared with -v must exist on the host where the docker agent will spawn the container... not necessarily on the client where the "docker run -v ..." command is executed (although they will be the same in many cases).
There are other possible explanations above for Mac and Windows... that could be it too.
So the file missing from the host is the problem... troubleshoot the problem in your setup... using $(pwd) could be the solution but not always.
test is the name of your image that you have built with 'docker build -t test', not a /test folder.
Try a Dockerfile with:
CMD ["ls", "-lah", "/"]
or
CMD ["cat", "/file.json"]
And:
docker run --rm -it -v $(pwd)/file.json:/file.json test
Note the use of $(pwd) in order to mount a file with its full absolute path (relative paths are not supported)
By using $(pwd), you will get an absolute path which does exists, and respect the case, as opposed to a file name or path which might not exist.
A non-existing host path would be mounted as a folder in the container.
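A quick way to see the difference on a Linux host (the file names are just placeholders):
touch exists.json
docker run --rm -v $(pwd)/exists.json:/data.json ubuntu ls -ld /data.json    # mounted as a regular file
docker run --rm -v $(pwd)/missing.json:/data.json ubuntu ls -ld /data.json   # missing on the host, so it shows up as a directory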
When running docker inside docker (by mounting /var/run/docker.sock for example), you need to be aware that if you do mounts inside docker, the file paths used are always the ones on your host.
So if on your host you do the following mount :
-v /tmp/foobar.txt:/my/path/foobar.txt
you should not do the following mount inside docker :
-v /my/path/foobar.txt:/my/other/path.txt
but instead, use the host filepath, eg :
-v /tmp/foobar.txt:/my/other/path.txt
I spent a bit of time fighting with and diagnosing this issue running Docker on Windows. This could also affect people running on Mac OSX, so I add an answer here for people possibly having this issue in those environments, as my search brought me to this place, and to add an explanation of what appears to be happening in Docker.
In Windows or Mac OSX, your Docker daemon is actually running in a boot2docker VM, and only the users directory is shared by default. On Windows, this user directory is shared as /c/Users/; however, in the MinGW shell shipped with Docker Machine the drive can also be accessed as /C or /c. This can drive you nuts if you forget that the docker commands are actually running against the boot2docker VM: your file paths have to exist on the boot2docker VM and be specified the way they exist there. Instead of giving a warning or error that the directory/file does not exist, Docker silently creates the specified source as a directory in the boot2docker VM, so there is no ready output to indicate that you are doing anything incorrectly.
So, as in the answer above, if your file is mounting as a directory, then check that you are providing an absolute path. For Windows and Mac OSX check that the absolute path that you are mounting exists in your boot2docker VM.
A case where Docker might not find the file, even though you're sure it exists
As edi9999 pointed out, if you tell the docker daemon to mount a file, it won't look into your current container's filesystem, it will look into the filesystem where the daemon is running.
You might have this problem if your docker daemon is running elsewhere for some reason.
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
/ # echo "bar" > /foo
/ # docker run --rm -v /foo:/foo ubuntu bash -c 'cat foo'
cat: foo: Is a directory
Docker can't find the /foo file on its host, so it (helpfully?) creates a directory there so at least you've mounted something.
A Workaround
You can work around this by mounting a host directory into the outer container, and then using that directory for the volume you want to appear in the inner container:
❯ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /dev/shm:/dev/shm docker
/ # echo "bar" > /dev/shm/foo
/ # docker run --rm -v /dev/shm/foo:/dev/shm/foo ubuntu bash -c 'cat /dev/shm/foo'
bar
This makes the path /dev/shm/foo refer to the same file in either context, so you can reference the file from the outer container, and the daemon will find it on the host, which means it will show up as itself in the inner container, rather than as a directory.
For "Docker for Mac/Windows" users, make sure the volume you're trying to mount from your host is part of your "File sharing" preferences:
There is a simple solution for those who use the VirtualBox machine.
By default, the C:/User folder is added. If your project is in C:/projects, add this folder to make it available in VB (with automount).
I had the same problem as being discussed here with Docker on my MacBook and none of the suggestions worked for me. Turns out that the issue was that I did not have permissions to the file I was trying to mount (it was owned by root). I copied it to my user path and changed ownership to myself and then the file mounted as a file and not a directory. I think, if you don't have permissions, Docker interprets this as the file simply not existing and then proceeds to mount it as a new directory
In my case, on MacOS, I had something like this
version: "3.7"
services:
  if_django:
    build: ./web/
    image: if_django
    restart: always
    container_name: cf_django
    command: >
      sh -c "python manage.py collectstatic --noinput
      && uwsgi --ini core.uwsgi.ini"
    volumes:
      - ./logs/core.log:/code/core.log
      - uwsgi-data:/tmp/uwsgi/
      - web-static:/code/static/
      - web-static:/var/www/core/assets/
  if_nginx:
    build: ./nginx/
    image: if_nginx
    restart: always
    container_name: cf_nginx
    volumes:
      - uwsgi-data:/tmp/uwsgi/
      - web-static:/var/www/core/assets/:ro
      - ./logs:/var/log/nginx/
    ports:
      - 8800:80
    depends_on:
      - if_django

volumes:
  uwsgi-data:
  web-static:
And the ./logs/core.log file was created as a directory, which made the container cf_django fail to start up.
My solution?
mkdir -p logs && touch logs/core.log && docker-compose up --build -d
I will share my experience: I tend to change my passwords a lot, and I had to "reset credentials" in Docker Desktop for Windows for files to be mounted correctly (of course, you should also make sure that the files are in the shared paths).

Exploring Docker container's file system

I've noticed with docker that I need to understand what's happening inside a container or what files exist in there. One example is downloading images from the docker index - you don't have a clue what the image contains so it's impossible to start the application.
What would be ideal is to be able to ssh into them or equivalent. Is there a tool to do this, or is my conceptualisation of Docker wrong in thinking I should be able to do this?
Here are a couple different methods...
A) Use docker exec (easiest)
Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. This command can run a new process in an already running container (the container must have its PID 1 process running). You can run /bin/bash to explore the container's state:
docker exec -t -i mycontainer /bin/bash
see Docker command line documentation
B) Use Snapshotting
You can evaluate container filesystem this way:
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash
This way, you can evaluate the filesystem of the running container at a precise moment in time. The container is still running, and no future changes are included.
You can later delete the snapshot (the filesystem of the running container is not affected!) using:
docker rmi mysnapshot
C) Use ssh
If you need continuous access, you can install sshd in your container and run the sshd daemon:
docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
# you need to find out which port to connect:
docker ps
This way, you can run your app using ssh (connect and execute what you want).
D) Use nsenter
Use nsenter, see Why you don't need to run SSHd in your Docker containers
The short version is: with nsenter, you can get a shell into an
existing container, even if that container doesn’t run SSH or any kind
of special-purpose daemon
UPDATE: EXPLORING!
This command should let you explore a running docker container:
docker exec -it name-of-container bash
The equivalent for this in docker-compose would be:
docker-compose exec web bash
(web is the name-of-service in this case and it has tty by default.)
Once you are inside do:
ls -lsa
or any other bash command like:
cd ..
This command should let you explore a docker image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do:
ls -lsa
or any other bash command like:
cd ..
The -it stands for interactive... and tty.
This command should let you inspect a running docker container or image:
docker inspect name-of-container-or-image
You might want to do this and find out if there is any bash or sh in there. Look for entrypoint or cmd in the json return.
NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls present, you could first add them in a layer, provided you have access to the Dockerfile:
example for alpine:
RUN apk add --no-cache bash
Otherwise, if you don't have access to the Dockerfile, then just copy the files out of a newly created container and look through them by doing:
docker create <image>   # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
see docker exec documentation
see docker-compose exec documentation
see docker inspect documentation
see docker create documentation
In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.
You may archive your container's filesystem into tar file:
docker export adoring_kowalevski > contents.tar
Or list the files:
docker export adoring_kowalevski | tar t
Do note that, depending on the image, it might take some time and disk space.
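To browse the exported filesystem afterwards, standard tar extraction is enough (the directory name is arbitrary):
mkdir contents
tar -xf contents.tar -C contents
ls -la contents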
Before container creation:
If you want to explore the structure of the image that is mounted inside the container, you can do
sudo docker image save image_name > image.tar
tar -xvf image.tar
This will give you visibility into all the layers of the image and its configuration, which is present in JSON files.
After container creation:
For this there are already a lot of answers above. My preferred way to do it would be:
docker exec -t -i container /bin/bash
The most upvoted answer works for me when the container is actually started, but when it isn't possible to run it and you, for example, want to copy files out of the container, this has saved me before:
docker cp <container-name>:<path/inside/container> <path/on/host/>
Thanks to docker cp you can copy directly from the container as if it were any other part of your filesystem.
For example, recovering all files inside a container:
mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/
Note that you don't need to specify that you want to copy recursively.
The filesystem of the container is in Docker's data folder, normally /var/lib/docker. In order to start and inspect a running container's filesystem, do the following:
hash=$(docker run -d busybox sleep 300)   # -d prints the full container ID
cd /var/lib/docker/aufs/mnt/$hash
And now the current working directory is the root of the container.
You can use dive to view the image contents interactively with a TUI:
https://github.com/wagoodman/dive
Try using
docker exec -it <container-name> /bin/bash
There is a possibility that bash is not installed in the container. In that case you can use:
docker exec -it <container-name> sh
On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:
/var/lib/docker/devicemapper/mnt/<container id>/rootfs/
Full Docker version information:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
In my case no shell except sh was available in the container, so this worked like a charm:
docker exec -it <container-name> sh
The most voted answer is good, except when your container isn't an actual Linux system.
Many containers (especially the Go-based ones) don't have any standard binaries (no /bin/bash or /bin/sh). In that case, you will need to access the actual container's files directly:
Works like a charm:
name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId
Note: You need to run it as root.
I use another dirty trick that is aufs/devicemapper agnostic.
I look at the command that the container is running, e.g. with docker ps,
and if it's an Apache or Java process I just do the following:
sudo -s
cd /proc/$(pgrep java)/root/
and voilà, you're inside the container.
Basically, as root you can cd into the /proc/<PID>/root/ folder, as long as that process is run by the container. Beware: symlinks will not make sense while using that mode.
Only for Linux
The simplest way that I use is the proc dir; the container must be running in order to inspect its files.
Find out the process id (PID) of the container and store it into some variable
PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
Make sure the container process is running, and use the variable name to get into the container folder
cd /proc/$PID/root
If you want to get through the dir without finding out the PID number, just use this long command
cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root
Tips:
After you get inside the container, everything you do will affect the actual process of the container, such as stopping the service or changing the port number.
Hope it helps
Note:
This method only works while the container is still running; once the container has stopped or been removed, the directory no longer exists.
None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.
For a real manual inspection, find out the layer IDs first:
docker inspect my-container | jq '.[0].GraphDriver.Data'
In the output, you should see something like
"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"
Navigate into this folder (as root) to find the current visible state of the container filesystem.
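A one-liner variant of the same lookup, assuming the overlay2 storage driver and that jq is installed:
cd "$(docker inspect my-container | jq -r '.[0].GraphDriver.Data.MergedDir')"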
This will launch a bash session for the image:
docker run --rm -it --entrypoint=/bin/bash <image-name>
On newer versions of Docker you can run docker exec [container_name] [command], which runs the command inside your container.
So to get a list of all the files in a container just run docker exec [container_name] ls
I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.
What worked for me was to simply copy the contents of the entire container into a new folder like this:
docker cp container_name:/app/ new_dummy_folder
I was then able to explore the contents of this folder as one would do with a normal folder.
For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):
chroot /var/lib/docker/containers/2465790aa2c4*/root/
Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.
For docker aufs driver:
The following script will find the container root dir (tested on Docker 1.7.1 and 1.10.3):
if [ -z "$1" ] ; then
    echo 'docker-find-root $container_id_or_name'
    exit 1
fi
CID=$(docker inspect --format {{.Id}} $1)
if [ -n "$CID" ] ; then
    if [ -f /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id ] ; then
        F1=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id)
        d1=/var/lib/docker/aufs/mnt/$F1
    fi
    if [ ! -d "$d1" ] ; then
        d1=/var/lib/docker/aufs/diff/$CID
    fi
    echo $d1
fi
This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.
List running docker containers:
docker ps
=> CONTAINER ID "4c721f1985bd"
Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):
docker inspect -f {{.Mounts}} 4c721f1985bd
=> [{ /tmp/container-garren /tmp true rprivate}]
This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.
Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
If you are using Docker v19.03, follow the steps below.
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem
docker run -t -i mysnapshot /bin/sh
For an already running container, you can do:
dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])
cd /var/lib/docker/btrfs/subvolumes/$dockerId
You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.
Edit: Following v1.3, see Jiri's answer - it is better.
Another trick is to use the atomic tool to do something like:
mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt
The Docker image will be mounted to /path/to/mnt for you to inspect it.
My preferred way to understand what is going on inside a container is:
Expose a port (-p 8000):
docker run -it -p 8000:8000 image
Start a server inside it:
python -m SimpleHTTPServer
If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and read-write layer:
# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
Edit 2018-03-28 :
docker-layer has been replaced by docker-backup
The docker exec command to run a command in a running container can help in multiple cases.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
For example:
1) Accessing the running container's filesystem in bash:
docker exec -it containerId bash
2) Accessing the running container's filesystem in bash as root, in order to have the required rights:
docker exec -it -u root containerId bash
This is particularly useful for doing some processing as root in a container.
3) Accessing the running container's filesystem in bash with a specific working directory:
docker exec -it -w /var/lib containerId bash
Oftentimes I only need to explore the docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.
While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:
CMD [ "ls", "-R" ]
I've found the easiest, all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container.
mc editing files in docker
Inside the container, install mc and ssh: run docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages.
In the same exec-bash console, run mc.
Press ESC then 9 then ENTER to open the menu and select "Shell link...".
Using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running Docker) by its IP address.
Do your job in the graphical UI.
This method overcomes all issues with permissions, snap isolation etc., allows copying directly to any machine, and is the most pleasant one for me to use.
I had an unknown container that was doing some production workload, and I did not want to run any command in it.
So, I used docker diff.
This lists all the files that the container has changed and is therefore well suited to exploring the container's filesystem.
To get only a folder you can just use grep:
docker diff <container> | grep /var/log
It will not show files from the docker image. Depending on your use case this can help or not.
Late to the party, but in 2022 we have VS Code

Backup a running Docker container?

Is it possible to backup a running Docker container? Is the export command suitable for doing that?
Posted by a friend in the comments:
Hi Slava, sorry that your question was closed. For the record, Slava is talking about docker.io, a runtime for linux containers. Yes, docker export is a suitable approach. It will generate a tarball of your entire container filesystem state, and dump it on stdout. So
docker export $CONTAINER_ID > $CONTAINER_ID-backup.tar
will yield a usable tarball. You can re-import the tarball with
docker import - slava/$CONTAINER_ID-backup < $CONTAINER_ID-backup.tar
Note the original metadata (eg id of the original image) will be lost. This should be fixed in future versions of docker.
– Solomon Hykes Apr 2 '13 at 6:35
Adding it here so one can see from the summary that the question was answered. Thanks Solomon!
export has some limitations: it won't export the data volume.
Here data volume means:
There's a VOLUME defined in the image's Dockerfile.
The container is started with a parameter like this: -v /webapp
More about data volumes: https://docs.docker.com/userguide/dockervolumes/
The way to handle this situation is to start a new container with the --volumes-from parameter to hook onto that container, so you can access the data volume.
Examples:
Visit the data (in a bash shell):
docker run -it --volumes-from target_container ubuntu bash
Back up to the host (a postgres container):
docker run -it --volumes-from some_postgres -v /host/path:/container/path --rm ubuntu bash -c "tar -cvf /container/path/postgres-backup.tar /var/lib/postgresql/data"
Using docker commit is my preferred way to back up a container (started or stopped). It creates an image that you can name:
docker commit -p <container_id> <backup-name>
You can also use save and load. Here's a sample:
sudo docker images
awan@google-dev:~/StarCenter/_docker$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
google_star/_version_1.10 latest 1067d6689697b2 4 days ago 1.666 GB
sudo docker save google_star/_version_1.10 > my_docker-backup.tgz
Restore it using sudo docker load < my_docker-backup.tgz.
Check your images using sudo docker images on your new Docker machine.
