Share folders between host and container in Docker for Windows

I'm using the latest Docker for Windows, which needs Hyper-V to be enabled; VirtualBox cannot be used in this case.
I've installed the ubuntu container and started it. I want to mount C:\Users\username in the docker container. I've tried the following methods:
docker run -t -i -v /c/Users/username:/mnt/c ubuntu /bin/bash
docker run -d -P --name windows -v C:\Users\username:/mnt/c ubuntu /bin/bash
None of them worked. I noticed that /mnt/c was created automatically, but it contained nothing.
Given that Docker for Windows is pretty new, most information I found online was about Boot2Docker or VirtualBox, which is useless to me.
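For reference, the pattern that generally works with Docker for Windows is to first share the C drive in the Docker settings (Settings > Shared Drives) and then use a Windows-style path with forward slashes; a sketch, not taken from the original question:
docker run --rm -it -v c:/Users/username:/mnt/c ubuntu /bin/bash
# inside the container, ls /mnt/c should then show the contents of C:\Users\username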

Related

How to find out which Linux is installed inside docker image?

I am new to docker and it is just a fascinating tool. However, I can't understand one thing about it. A simple Dockerfile usually begins with an OS name and version, like:
FROM ubuntu:xenial
....
But which Linux OS will be used for a Dockerfile like
FROM perl
....
or
FROM python:3.6
....
Of course I can find this out by running a container from this image and printing out the OS info, like:
docker run -it --rm perl bash
# cat /etc/*-release
or
docker run -it --rm python:3.6 bash
# cat /etc/*-release
BTW, in both cases the OS is "Debian GNU/Linux 10 (buster)".
So, my questions are:
How do I find out which OS will be run for a specific Docker image without actually creating a Docker container from it? (The docker inspect command does not provide this info: docker inspect perl | grep -i Debian.)
How do I change the OS type for an existing Docker image? For example, I have an image that uses Ubuntu 14.04, and I want to change it to Ubuntu 18.04.
Thank you for your help:)
A Docker image doesn't need an OS. You can extend the scratch image, which is purposely empty, so the container may contain only a single binary or some volumes.
Having an entire OS is possible but also misleading: the host shares its kernel with the container. (This is not a virtual machine.)
That means that no matter what "OS" you are running, the same kernel is found in the container:
Both:
docker run --rm -it python:3.6 uname -a
docker run --rm -it python:3.6-alpine uname -a
will report the same kernel of your host machine.
So you have to look at the distribution's own release files instead:
docker run --rm -it python:3.6 cat /etc/os-release
or
lsb_release -sirc
or, for CentOS:
cat /etc/issue
Instead of scratch, a lot of images are also Alpine-based to avoid the size overhead. An Ubuntu base image can easily have a 500 MB footprint whereas Alpine uses around 5 MB, so I would check for that as well.
Also avoid the trap of manually installing everything onto one Ubuntu image inside one big Dockerfile. Docker works best if each service is its own container that you link together. (For that, check out docker-compose.)
In the end, as a user you shouldn't care about the OS of an image, but rather its size. Only as the developer of the Dockerfile is it relevant to know the OS, and you'll find that out by looking at the Dockerfile the image was built from (if it's on Docker Hub you can read it there).
You basically have to look at what was used to create your image and use the appropriate tools for the job. (Debian-based images use apt-get, Alpine uses apk, and Fedora uses yum; see the sketch below.)
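For example, installing the same package looks like this in each family (curl is only an illustrative package name):
# Debian/Ubuntu-based images
apt-get update && apt-get install -y curl
# Alpine-based images
apk add --no-cache curl
# Fedora/CentOS-based images
yum install -y curl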
How do I find out which OS will be run for a specific docker image without actually creating a docker container from it
The only way to determine which OS is being used is as you have described: spawn a container and print the OS information. There is no metadata that says "this image was built using <x>".
In many (but not all) situations, this information may not be especially important.
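If you do go that route, a throwaway container is enough. For example, a quick sketch using the perl image from the question:
docker run --rm perl cat /etc/os-release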
How do I change the OS type for an existing Docker image? For example, I have an image that uses Ubuntu 14.04, and I want to change it to Ubuntu 18.04.
If you have access to the Dockerfile used to build the image, you can of course change the base image (the image named in the FROM line) and build a new one, but you may find that this requires a number of other changes due to different software versions in your updated image.
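As a minimal sketch (the surrounding Dockerfile contents are hypothetical), the change itself is only the FROM line:
# was: FROM ubuntu:14.04
FROM ubuntu:18.04
# ...rest of the Dockerfile stays the same...
Then rebuild with something like docker build -t myimage:18.04 . (myimage is a placeholder tag) and re-test, since package versions differ between the two releases.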
You can use "docker cp" to extract the "/etc/os-release" file without starting the container:
$ docker pull ubuntu:latest
Status: Image is up to date for ubuntu:latest
$ docker create ubuntu:latest
2e5da8bf02312870acd0436e0cc4eb28fbcc998f766cd9639c37101f65739553
$ docker cp -L 2e5da8bf02312870acd0436e0cc4eb28fbcc998f766cd9639c37101f65739553:/etc/os-release .
$ docker rm 2e5da8bf02312870acd0436e0cc4eb28fbcc998f766cd9639c37101f65739553
$ cat ./os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Note: I had to use "docker cp -L" because /etc/os-release is a symlink on ubuntu:latest.
Honestly, I find this to be a lot of trouble just to avoid starting the container, and it requires the "/etc/os-release" file to be present. If you're willing to (very) briefly run the container, I find this more convenient, and a little more robust. Note: it's very important to specify --entrypoint="", otherwise the container will start invoking its normal startup routine!
$ docker run --rm -i -a STDOUT --entrypoint="" \
ubuntu:latest sh -c 'head -n 1000 /etc/hostname /etc/*[Rr][Ee][Ll]*'
==> /etc/hostname <==
b243ff33e245
==> /etc/lsb-release <==
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"
==> /etc/os-release <==
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Here's the same command against "alpine:latest":
docker run --rm -i -a STDOUT --entrypoint="" \
alpine:latest 'sh' '-c' 'head -n 1000 /etc/hostname /etc/*[Rr][Ee][Ll]*'
==> /etc/hostname <==
a8521c768aeb
==> /etc/alpine-release <==
3.13.4
==> /etc/os-release <==
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.13.4
PRETTY_NAME="Alpine Linux v3.13"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
Note: I add "/etc/hostname" to the list of files to "head" to make sure it finds 2 or more files, to ensure "head" to uses its "==> file <==" output style. Whereas if it only runs against a single file it doesn't print the filename.

Running a desktop environment in docker on headless linux

Is it possible, on headless Linux (to be exact, Linux with no desktop environment), to run a GUI from inside Docker?
(Only if it couldn't be done differently with an X server of some sort; I would rather run everything within Docker.)
I want to run a GUI only on occasion, and I don't want it to share user space with the base system's programs. I also don't want to keep the desktop environment around until the next time it's needed.
Sure it's possible!
First let's create a docker volume to store the X11 socket:
docker volume create --name xsocket
Now we can create an image with X Server:
FROM ubuntu
RUN apt-get update && \
DEBIAN_FRONTEND='noninteractive' apt-get install -y xorg
CMD /usr/bin/X :0 -nolisten tcp vt1
Let's build it and start it, storing the X11 socket in the xsocket Docker volume:
docker build . -t docker-x-server:latest
docker run --privileged -v xsocket:/tmp/.X11-unix -d docker-x-server:latest
Now we can run a GUI application in another docker container (yay!) and point it to our X server using xsocket volume:
docker run --rm -it -e DISPLAY=:0 -v xsocket:/tmp/.X11-unix:ro stefanscherer/xeyes
If you need input (like a keyboard), install the xserver-xorg-input-evdev package and add -v /run/udev/data:/run/udev/data, since there's no udev in containers by default.
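A sketch of what that looks like for the X server container from above (assuming xserver-xorg-input-evdev has been added to the image's apt-get install line):
docker run --privileged -v xsocket:/tmp/.X11-unix -v /run/udev/data:/run/udev/data -d docker-x-server:latest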
You can even get rid of the --privileged flag by granting the SYS_TTY_CONFIG capability and binding some devices into the container:
docker run --name docker-x-server --device=/dev/input --device=/dev/console --device=/dev/dri --device=/dev/fb0 --device=/dev/tty --device=/dev/tty1 --device=/dev/vga_arbiter --device=/dev/snd --device=/dev/psaux --cap-add=SYS_TTY_CONFIG -v xsocket:/tmp/.X11-unix -d docker-x-server:latest

Exposing a TTY device in a docker container with docker for mac

I'm trying to expose an Arduino that's plugged into my Mac to a Linux instance I'm running in Docker for Mac (no VM).
The Arduino exposes itself as /dev/tty.usbserialXXX. I'm using the node Docker image, which is based upon Ubuntu.
The command I'm running is
$ docker run --rm -it -v `pwd`:/app --device /dev/tty.usbmodem1421 node bash
docker: Error response from daemon: linux runtime spec devices: error gathering device information while adding custom device "/dev/tty.usbmodem1421": lstat /dev/tty.usbmodem1421: no such file or directory.
If I try to use --privileged
$ docker run --rm -it -v `pwd`:/app --device /dev/tty.usbmodem1421 --privileged node bash
root@8f18fdbcf64d:/# ls /dev/tty.*
ls: cannot access /dev/tty.*: No such file or directory
Nothing is exposed!
I'm using this to expose serial devices to test serial drivers in linux.
The problem here is largely that you're not running Docker on your Mac. You're running a Linux VM on your Mac, inside which you're running Docker. This means that it's easy to expose the /dev tree inside the Linux VM to Docker, but less easy to expose devices from your Mac, absent some kind of support from the hypervisor.
Using the legacy "Docker Toolbox" for Mac, which is built around VirtualBox, it ought to be possible to assign a USB device to the VirtualBox VM running Docker (which would in turn allow you to expose it to your Docker containers).
This GitHub issue talks about this particular situation and has links to helpful documentation.
I don't know if this sort of feature is currently available with the hypervisor used in the newer "Docker for Mac" package.
The Arduino device that is listed at /dev/tty.usbserialXXX could be a symlink to the device, and not the actual path. To read the symlink path, try using
docker run --rm -it -v `pwd`:/app --device=/dev/$(readlink /dev/tty.usbmodem1421) node bash
There was an issue open for this some time back. Do check if it solves your problem.

docker-machine breaks docker native client on linux

I am on Ubuntu and decided to use docker-machine to run some docker swarm tests. Here you execute
eval $(docker-machine env xxxxx)
and with that your native Docker client points to that machine/VM. However, after the tests I want the docker command to point back to my local Docker client/daemon/whatever, so I executed
eval $(docker-machine env -u)
which is supposed to unset the environment variables. But now I get this error
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
I've had to create a Docker machine on VirtualBox called default, point to that machine and run my commands there. But it's pretty lame, since I feel like I am back on Windows, and one of the reasons I came to Ubuntu was better Docker integration.
Is there any fix for this?
Unset all Docker environment variables:
unset ${!DOCKER_*}
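As a quick sanity check (not part of the original answer), you can confirm that nothing still points at the machine and that the client talks to the local daemon again:
env | grep DOCKER   # should print nothing
docker ps           # should now list containers from the local daemon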
Regarding the "can't connect to daemon" error, ensure you're prepending each docker command with sudo, or, to allow your current user to interact with Docker, use:
sudo groupadd docker
sudo usermod -aG docker $(whoami)
Then restart Docker and log back in to the terminal.

Docker can't connect to docker daemon

After I updated my Docker version to 0.8.0, I get an error message when entering sudo docker version:
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
2014/02/19 12:54:16 Can't connect to docker daemon. Is 'docker -d' running on this host?
I've followed the instructions and entered the command sudo docker -d, and I got this:
[/var/lib/docker|2462000b] +job initserver()
[/var/lib/docker|2462000b.initserver()] Creating server
open /var/lib/docker/aufs/layers/cf2414da53f9bcfaa48bc3d58360d7f1cfd3784e4fe51fbef95197709dfc285d: no such file or directory[/var/lib/docker|2462000b] -job initserver() = ERR (1)
2014/02/19 12:55:57 initserver: open /var/lib/docker/aufs/layers/cf2414da53f9bcfaa48bc3d58360d7f1cfd3784e4fe51fbef95197709dfc285d: no such file or directory
How do I solve the problem?
Linux
The Post-installation steps for Linux documentation reveals the following steps:
Create the docker group.
sudo groupadd docker
Add the user to the docker group.
sudo usermod -aG docker $(whoami)
Log out and log back in to ensure docker runs with correct permissions.
Start docker.
sudo service docker start
Mac OS X
As Dayel Ostraco says, it is necessary to add environment variables:
docker-machine start # Start virtual machine for docker
docker-machine env # Helps to get environment variables
eval "$(docker-machine env default)" # Set environment variables
The docker-machine start command outputs the comments to guide the process.
Linux
To run docker daemon on Linux (from CLI), run:
$ sudo service docker start # Ubuntu/Debian
Note: Skip the $ character when copying and pasting.
On RedHat/CentOS, run: sudo systemctl start docker.
To initialize the "base" filesystem, run:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
or manually like:
$ sudo docker -d --storage-opt dm.basesize=20G
Install docker-machine on Linux
To install machine binaries on Linux:
locally:
install -vm755 <(curl -L https://github.com/docker/machine/releases/download/v0.5.3/docker-machine_linux-amd64) $HOME/bin/docker-machine
globally:
sudo bash -c 'install -vm755 <(curl -L https://github.com/docker/machine/releases/download/v0.5.3/docker-machine_linux-amd64) /usr/local/bin/docker-machine'
macOS
On macOS the docker binary is only a client and you cannot use it to run the Docker daemon, because the Docker daemon uses Linux-specific kernel features; therefore you can't run Docker natively on OS X. So you have to install docker-machine in order to create a VM and attach to it.
Install docker-machine on macOS
If you don't have docker-machine command yet, install it by using one of the following methods:
Using Brew command: brew install docker-machine docker.
Manually from GitHub:
install -v <(curl https://github.com/docker/machine/releases/download/v0.5.3/docker-machine_linux-amd64) /usr/local/bin/docker-machine
See: Get started with Docker for Mac.
Configure docker-machine on macOS
To start Docker Machine via Homebrew, run:
brew services start docker-machine
To create a default machine (if you don't have one, see: docker-machine ls):
docker-machine create --driver virtualbox default
Then set-up the environment for the Docker client:
eval "$(docker-machine env default)"
Then double-check by listing containers:
docker ps
See: Get started with Docker Machine and a local VM.
Install Docker.app on macOS
Alternatively to the above solution, you can install the Docker app with:
brew cask install docker
Check this post for more details. See also: Cannot connect to the Docker daemon on macOS
If you are running Docker on OS X, running the following eval has worked for me.
eval "$(docker-machine env default)"
If you'd prefer not to have to run this eval statement on every terminal session, you can add this to your bash_profile:
#Docker
eval "$(docker-machine env default)"
Be sure to restart the terminal session or run source on bash_profile for the changes to take effect.
After a detailed investigation, this issue seems to happen every time after Mac OS X is rebooted (or the Docker virtual machine is restarted), which prevents the Docker client from connecting to the Docker daemon.
To solve the issue, you can either:
A) Reinstall Docker Toolbox using the official installer (https://www.docker.com/products/docker-toolbox), or simply
B) Run the following commands in order:
# First make sure that the virtual machine is running
docker-machine start default
# Regenerate TLS connection certs, requires confirmation
docker-machine regenerate-certs default
# Finally, set env
eval "$(docker-machine env default)"
C) Same as (B), but you can copy and paste the following line to run all three commands at once:
docker-machine start default; docker-machine regenerate-certs default; eval "$(docker-machine env default)"
In case you get the following error:
Error getting SSH command: Something went wrong running an SSH command!
command : cat /etc/os-release
err : exit status 255
output :
just re-run the three commands another time, and it should work the second time.
This usually happens when you are not in the docker group. You can add yourself to the docker group with:
sudo usermod -aG docker yourusername
or
sudo usermod -aG docker $(whoami)
After this, you need to log out and log back into the server.
Alternatively, you can sudo every Docker command.
If none of the other solutions above work, you can try checking the ownership of /var/run/docker.sock:
ls -l /var/run/docker.sock
If you're not the owner then change ownership with the command:
sudo chown <your-username> /var/run/docker.sock
Then you can go ahead and try executing the Docker commands hassle-free :D
You can use the command
sudo service docker stop && sudo service docker start
OR
sudo service docker restart
to simply restart it.
The best way to find out why Docker isn't working will be to run the daemon manually.
$ sudo service docker stop
$ ps aux | grep docker # do this until you don't see /usr/bin/docker -d
$ /usr/bin/docker -d
The Docker daemon logs to STDOUT, so it will start spitting out whatever it's doing.
Here was what my problem was:
[8bf47e42.initserver()] Creating pidfile
2015/01/11 15:20:33 pid file found, ensure docker is not running or delete /var/run/docker.pid
This was because the instance had been cloned from another virtual machine. I just had to remove the pidfile, and everything worked afterwards.
Of course, instead of blindly assuming this will work, I'd suggest running the daemon manually one more time and reviewing the log output for any other errors before starting the service back up.
Do a ps aux | grep docker to see if the daemon is running. If not, run /etc/init.d/docker start
If you get the message Can't connect to docker daemon. Is 'docker -d' running on this host?, you can check with docker version.
If the output shows that the Docker client is running but the Docker server is not, you obviously need to start the Docker server.
On CentOS, you can use service to start or stop the Docker server.
$ sudo service docker stop
$ sudo service docker start
Then, after you type docker version, you will get the information for both the Docker client and the Docker server, which means the Docker daemon has been started.
Use Docker CE app
macOS
Use the new Docker Community Edition app for macOS. For example:
Uninstall all Docker Homebrew packages which you've installed so far:
brew uninstall docker-compose
brew uninstall docker-machine
brew uninstall docker
Install the app manually or via Homebrew Cask:
brew install --cask docker
Note: This app will create necessary links to docker, docker-compose, docker-machine, etc.
After running the app, check for the Docker whale icon in the status menu.
Now you should be able to use docker, docker-compose, docker-machine commands as usual in the Terminal.
Related:
Brew install docker does not include docker engine?
Cannot connect to the Docker daemon on macOS
Linux/Windows
Download the Docker CE from the download page and follow the instructions.
I had a similar problem. I had to log out and log back in to the shell, because I had just installed Docker and the following setting didn't show up in my environment.
echo 'export DOCKER_HOST=127.0.0.1:4243' >> ~/.bashrc
I restarted Docker after installing it:
$ sudo service docker stop
$ sudo service docker start
And it works.
I have faced this problem, and I restarted Docker using these commands:
$ sudo service docker stop
$ sudo service docker start
But that did not solve my problem, because I was executing my Docker commands without sudo. For those who face this problem, try to check that.
Try
$ sudo docker info
instead of this:
$ docker info
I had the same error, and trying docker-machine regenerate-certs or the eval did not work for me.
This was on OS X 10.11.3 (El Capitan) with Docker v1.10.1. I was able to fix it only by deleting and recreating the Docker machine.
If running docker-machine ls shows output similar to the one below, with the DOCKER column reading "Unknown" and the ERRORS column showing:
Unable to query docker version: Cannot connect to the docker engine endpoint
Try removing your Docker machine with:
docker-machine rm -f default
Where default is your Docker machine name. Then:
docker-machine create -d virtualbox default
Creates a new Docker machine.
Double check that everything looks normal now (no errors or unknown Docker) with:
docker-machine ls
Finally, don't forget to run eval "$(docker-machine env default)" before you continue, or run the Docker Quickstart Terminal which does it for you...
I know that there are plenty of answers already in this post. I would just like to add one simple answer that solved the above-mentioned problem:
sudo systemctl start docker
Run the above command and it will start all the Docker-related services.
Try adding the current user to docker group:
sudo usermod -aG docker $USER
Then log out and login.
As of April 2020 on macOS Catalina, you just need to open the Docker Desktop application.
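If you prefer launching it from a terminal, the standard macOS open command works as well (assuming the app is installed under /Applications as "Docker"):
open -a Docker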
I had the same problem - "Can't connect to docker daemon." (except I didn't get any 'file not found' errors on trying to start the server.)
'ps' showed that "/usr/bin/docker -d" was still running
I realised that I'd never actually succeeded in running the server myself though. Every attempt had produced
...
2014/03/24 21:57:29 pid file found, ensure docker is not running or delete /var/run/docker.pid
So I belatedly realised that installing Docker had maybe registered the daemon with upstart, which had started it for me. Hence, trying to kill the daemon to manually restart it failed (operation not permitted). So I did a
sudo kill -9 <PID>
on the daemon process. Another daemon immediately took its place, and this new one DOES now let my CLI client connect:
$ sudo docker info
Containers: 0
Images: 0
Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 0
WARNING: No memory limit support
WARNING: No swap limit support
Following Docker's documentation: Manage Docker as a non-root user
1) Create the docker group:
sudo groupadd docker
2) Add your user to the docker group to get the group's privileges:
sudo usermod -aG docker $USER
Check whether the DOCKER_HOST environment variable is set for your shell.
env | grep DOCKER_HOST
If it exists,
unset DOCKER_HOST
Then this should work:
docker run hello-world
I just had the same issue, running on Amazon AWS.
Here's what I attempted:
Set up docker-machine locally with already existing AWS instance
Used generic setup
It kind of connected, but since the remote port was closed, it failed
After that, the Docker daemon refused to start up, but running dockerd did work...
I tested the following on the remote machine:
service docker start # Also restart, no success
systemctl start docker # Also restart, no success
dockerd # Success
I removed /var/lib/docker and uninstalled everything, but there was no success after reinstallation. Unfortunately I have no logs stored from failures, but docker.service just refused to start.
However, what finally solved my issue was basically:
sudo usermod -aG docker $(whoami)
I got the same problem. In CentOS 6.5:
ps aux |grep `cat /var/run/docker.pid`
If it shows no Docker daemon process exists, then I type:
docker -d
Then press Ctrl + D to stop Docker. Because we used the -d option, Docker will run as a daemon. Now we can do:
service docker start
Then I can do a docker pull centos. That's all.
NOTE: If these do not work, you can try yum update and then repeat these steps, because I ran yum install before them.
If you are running on OS X using Docker Toolbox, follow this.
Restart the daemon and configure your environment:
docker-machine restart
And then
docker-machine env
Finally,
eval $(docker-machine env)
To test the daemon is running:
docker ps -a or docker-machine ls. This will list all containers.
The Docker Service may not be running.
If you are on RedHat/Fedora/CentOS, please try this:
sudo systemctl start docker
If you are on Ubuntu/Debian:
sudo service docker start
Docker will start running on your host, on its respective port.
Run the following command:
docker context use default
To fix this issue, I had to enable the docker service:
sudo systemctl enable /usr/lib/systemd/system/docker.service
Check if you are using Docker Machine :)
Running docker-machine env default should do the trick.
Because, according to the documentation:
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean.
Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
Point the Machine CLI at a running, managed host, and you can run docker commands directly on that host. For example, run docker-machine env default to point to a host called default, follow on-screen instructions to complete env setup, and run docker ps, docker run hello-world, and so forth.
https://docs.docker.com/machine/overview/
I also had the same issue. The problem was in the sockets used by the Docker daemon and the Docker client.
First, permission was not set for the Docker client on docker.sock. You can set it using sudo usermod -aG docker $USER.
Then check your bash file for where the Docker client is pointed. For me it was 0.0.0.0:2375, while the Docker daemon was running on a Unix socket (this was set in dockerd's configuration file).
Just comment out that line in your bash file and it'll work fine.
But if you want it to work over the TCP port instead of the Unix socket, change dockerd's configuration to listen on 0.0.0.0:2375 and keep the line in your bash file as it is (if present), or set it to 0.0.0.0:2375.
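A minimal sketch of making the two sides agree on TCP (the address and port are only examples, and exposing the daemon over plain TCP without TLS is only reasonable on a trusted network):
In dockerd's configuration, e.g. /etc/docker/daemon.json:
{ "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"] }
And the matching client setting in your shell profile:
export DOCKER_HOST=tcp://127.0.0.1:2375
On systemd-based installs, an -H flag in the docker.service unit can conflict with the hosts key, so check how dockerd is started.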
To fix, you need to issue the following commands in the terminal. I'll explain each step:
# Uninstall Docker from apt packages
$ sudo apt-get remove docker docker.io
# Remove it from the libraries just to be
# sure it's gone forever
$ sudo rm -rf /var/lib/docker/*
Now, if you want to simplify things and save some time, you can run my init script with the parameter installDocker:
# Pull the init script from GitHub
$ wget https://github.com/dminca/dotfiles/blob/master/init
# Add rights to run the script
$ chmod 755 init
# Just run the script with the installDocker parameter
$ ./init installDocker
A reboot is optional, but I suggest you do it to be sure all runs smoothly.
I had the same problem running Docker 1.10 on Ubuntu 14.04 and none of the given answers worked. For me, the fix was to specify the storage driver when running the Docker daemon.
sudo docker daemon --storage-driver=devicemapper
