Failure to launch Docker on OS X Yosemite

I'm having trouble installing Docker on Mac OS X Yosemite (10.10.4): when I try with the Docker Quickstart Terminal from the Docker Toolbox, I get this:
. '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'
bash-3.2$ . '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'
Creating Machine default...
executing: /usr/local/bin/VBoxManage
STDOUT: Oracle VM VirtualBox Command Line Management Interface Version 5.0.2
(C) 2005-2015 Oracle Corporation
All rights reserved.
Usage:
VBoxManage [<general option>] <command>
STDERR:
Creating VirtualBox VM...
Creating SSH key...
Creating disk image...
Creating 20000 MB hard disk image...
Converting from raw image file="stdin" to file="/Users/arbi/.docker/machine/machines/default/disk.vmdk"...
Creating dynamic image with size 20971520000 bytes (20000MB)...
executing: /usr/local/bin/VBoxManage createvm --basefolder /Users/arbi/.docker/machine/machines/default --name default --register
STDOUT: Virtual machine 'default' is created and registered.
UUID: e0f2a54b-b11a-47e2-9f3e-450f6fea78c8
Settings file: '/Users/arbi/.docker/machine/machines/default/default/default.vbox'
STDERR:
VM CPUS: 1
VM Memory: 2048
executing: /usr/local/bin/VBoxManage modifyvm default --firmware bios --bioslogofadein off --bioslogofadeout off --bioslogodisplaytime 0 --biosbootmenu disabled --ostype Linux26_64 --cpus 1 --memory 2048 --acpi on --ioapic on --rtcuseutc on --natdnshostresolver1 off --natdnsproxy1 off --cpuhotplug off --pae on --hpet on --hwvirtex on --nestedpaging on --largepages on --vtxvpid on --accelerate3d off --boot1 dvd
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic1 nat --nictype1 82540EM --cableconnected1 on
STDOUT:
STDERR:
using 192.168.99.1 for dhcp address
executing: /usr/local/bin/VBoxManage list hostonlyifs
STDOUT: Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic2 hostonly --nictype2 82540EM --hostonlyadapter2 vboxnet0 --cableconnected2 on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storagectl default --name SATA --add sata --hostiocache on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storageattach default --storagectl SATA --port 0 --device 0 --type dvddrive --medium /Users/arbi/.docker/machine/machines/default/boot2docker.iso
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storageattach default --storagectl SATA --port 1 --device 0 --type hdd --medium /Users/arbi/.docker/machine/machines/default/disk.vmdk
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage guestproperty set default /VirtualBox/GuestAdd/SharedFolders/MountPrefix /
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage guestproperty set default /VirtualBox/GuestAdd/SharedFolders/MountDir /
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage sharedfolder add default --name Users --hostpath /Users --automount
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/Users 1
STDOUT:
STDERR:
Starting VirtualBox VM...
executing: /usr/local/bin/VBoxManage showvminfo default --machinereadable
STDOUT: name="default"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="e0f2a54b-b11a-47e2-9f3e-450f6fea78c8"
CfgFile="/Users/arbi/.docker/machine/machines/default/default/default.vbox"
SnapFldr="/Users/arbi/.docker/machine/machines/default/default/Snapshots"
LogFldr="/Users/arbi/.docker/machine/machines/default/default/Logs"
hardwareuuid="e0f2a54b-b11a-47e2-9f3e-450f6fea78c8"
memory=2048
. . .
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
STDERR:
using 192.168.99.1 for dhcp address
executing: /usr/local/bin/VBoxManage list hostonlyifs
STDOUT: Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic2 hostonly --nictype2 82540EM --hostonlyadapter2 vboxnet0 --cableconnected2 on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --natpf1 delete ssh
STDOUT:
STDERR: VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1766 of file VBoxManageModifyVM.cpp
executing: /usr/local/bin/VBoxManage modifyvm default --natpf1 ssh,tcp,127.0.0.1,52532,,22
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage startvm default --type headless
STDOUT: Waiting for VM "default" to power on...
VM "default" has been successfully started.
STDERR:
Error creating machine: exit status 1
You will want to check the provider to make sure the machine and associated resources were properly removed.
Starting machine default...
exit status 1
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Setting environment variables for machine default...
host is not running
docker is configured to use the default machine with IP
For help getting started, check out the docs at https://docs.docker.com
default is not running. Please start this with docker-machine start default
When I try to create the machine manually, it fails again:
$ docker-machine create --driver virtualbox default
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Error creating machine: exit status 1
You will want to check the provider to make sure the machine and associated resources were properly removed.
But then when I open VirtualBox, I see the default machine powered off. If I try to start it manually, it fails and I get the following error:
Failed to open a session for the virtual machine default.
Failed to load VMMR0.r0 (VERR_VMM_SMAP_BUT_AC_CLEAR).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: ConsoleWrap
Interface: IConsole {872da645-4a9b-1727-bee2-5585105b9eed}
Any idea why it's failing to start the default machine?

I had to downgrade to VirtualBox 4.3 to make the Docker host start successfully.

Uninstall VirtualBox:
To uninstall VirtualBox, open the disk image (dmg) file again and double-click on the uninstall icon contained therein.
Then re-install the Docker toolbox.
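If the half-created default machine from the log above is still registered after the downgrade, a sketch like the following (assuming the machine is still named default) removes it and lets docker-machine rebuild it cleanly:
# remove the broken machine and its leftover VirtualBox files, then recreate it
docker-machine rm -f default
docker-machine create --driver virtualbox default
# load the new machine's connection variables into this shell
eval "$(docker-machine env default)"
docker ps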

Related

Failed to connect to containerd: failed to dial

I just installed Docker CE on Ubuntu 14.04, following the official instructions and using the repository.
The installation went successfully and the daemon is running:
$ ps aux | grep docker
[...] /usr/bin/dockerd --raw-logs [...]
My user is in the docker group:
$ groups
[...] docker
The CLI can't seem to communicate with it, though (same with sudo):
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
The socket seems to have the correct permissions:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Feb 4 16:21 /var/run/docker.sock
The log does seem to complain about some issues, though:
$ sudo tail -f /var/log/upstart/docker.log
Failed to connect to containerd: failed to dial "/var/run/docker/containerd/docker-containerd.sock": dial unix:///var/run/docker/containerd/docker-containerd.sock: timeout
/var/run/docker.sock is up
time="2018-02-04T16:22:21.031459040+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=17147
INFO[0000] starting containerd module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
INFO[0000] setting subreaper... module=containerd
containerd: invalid argument
time="2018-02-04T16:22:21.056685023+01:00" level=error msg="containerd did not exit successfully" error="exit status 1" module=libcontainerd
Any advice to make this work?
Relogging and restarting Docker have already been done, of course.
As #bobbear suggested, and as is actually mentioned in the official docs, one of the prerequisites is:
Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
After having checked my Kernel version:
$ uname -a
Linux [...] 3.2.[...]-generic [...]-Ubuntu [...] x86_64
I searched for candidates:
$ apt-cache search linux-image
And installed my new_kernel:
$ sudo apt-get install \
linux-image-new_kernel \
linux-headers-new_kernel \
linux-image-extra-new_kernel
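After rebooting into the new kernel, a quick sanity check before retrying the daemon might look like this (a minimal sketch; only uname plus a version comparison, nothing Docker-specific):
# verify the running kernel satisfies Docker's >= 3.10 requirement, then restart the daemon
required=3.10
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current is >= $required, restarting docker"
    sudo service docker restart
else
    echo "kernel $current is still below $required"
fi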
The same situation happened to me. It is because your Linux kernel version is too low! Check it with the command "uname -r": if the version is below 3.10 (for example, Debian 7 Wheezy ships 3.2 by default), then even if you install docker-ce successfully, you still won't be able to start the Docker daemon. That's why! Most answers on the web just tell you to 'restart' this and that, but they don't consider this problem.

Remote LLDB debugging - Docker container

I'm trying to set up remote debugging with LLDB 4.0.1.
There's a Docker (17.06.0-ce) container with Arch Linux.
The Docker container is set to privileged mode, so LLDB can be started inside the container.
The container contains core_service, which is a Rust executable.
Commands run inside the container:
(lldb) target create target/debug/core_service
Current executable set to 'target/debug/core_service' (x86_64).
(lldb) process launch
Process 182 launched: '/srv/core_service/target/debug/core_service' (x86_64)
The problem is with remote debugging: lldb-server is started inside the container with lldb-server platform --server --listen 0.0.0.0:1234.
I can connect from the host lldb to the container's lldb-server, but I can't attach to or create processes.
Commands run on the host (lldb-server in container = localhost:1234):
(lldb) platform select remote-linux
Platform: remote-linux
Connected: no
(lldb) platform connect connect://localhost:1234
Platform: remote-linux
Triple: x86_64-*-linux-gnu
OS Version: 4.12.4 (4.12.4-1-ARCH)
Kernel: #1 SMP PREEMPT Fri Jul 28 18:54:18 UTC 2017
Hostname: 099bd76c07c9
Connected: yes
WorkingDir: /srv/core_service
(lldb) target create target/debug/core_service
Current executable set to 'target/debug/core_service' (x86_64).
(lldb) process launch
error: connect remote failed (Connection refused)
error: process launch failed: Connection refused
How can I fix it? Are there any Docker or Arch Linux settings that would cause this error?
It seems like there's some problem with lldb-server permissions in the Docker container.
Commands run on the host (lldb-server in container):
(lldb) platform shell ps -A
PID TTY TIME CMD
1 ? 00:00:00 bash
9 ? 00:00:00 nginx
10 ? 00:00:00 nginx
11 ? 00:00:00 lldb-server
25 ? 00:00:00 core_service
59 ? 00:00:00 lldb-server
68 ? 00:00:00 ps
(lldb) platform shell kill -9 25
(lldb) platform process launch target/debug/core_service
error: connect remote failed (Connection refused)
error: Connection refused
(lldb) platform process launch anything
error: connect remote failed (Connection refused)
error: Connection refused
But I can't figure out what it could be. lldb-server is run as root in the container, and I can execute shell commands using lldb.
Both the platform port (1234 in your case) and a gdbserver port (randomly generated by default) are needed. You can pin the gdbserver port with the lldb-server option --gdbserver-port.
Tested on Fedora 29 x86_64:
docker run --privileged -p 5000:5000 -p 5001:5001 fedora bash -c 'dnf -y install lldb;lldb-server platform --server --listen 0.0.0.0:5000 --gdbserver-port 5001'
and
echo 'int main(){}' >main.c;gcc -g -o main main.c;lldb -o 'platform select remote-linux' -o 'platform connect connect://localhost:5000' -o "target create ./main" -o 'b main' -o 'process launch'
(lldb) process launch
Process 45 stopped
* thread #1, name = 'main', stop reason = breakpoint 1.1
frame #0: 0x000000000040110f main`main at main.c:1
-> 1 int main(){}
Process 45 launched: '/root/main' (x86_64)
(lldb) _
This may be because the server cannot see any processes on the host; it is still wrapped in its own PID namespace. When you launch the LLDB server, use the host PID namespace:
docker run --pid=host --privileged <yourimage>
Hopefully this will allow your container to see all the host processes.
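Putting the two answers together, a container invocation that publishes both the platform port and a fixed gdbserver port while sharing the host PID namespace might look roughly like this (the image name and the 1235 gdbserver port are placeholders I picked, not from the original setup):
docker run --pid=host --privileged -p 1234:1234 -p 1235:1235 <yourimage> \
    lldb-server platform --server --listen 0.0.0.0:1234 --gdbserver-port 1235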

dockerd: Error running deviceCreate (CreatePool) dm_task_run failed

I'm building some CentOS VMs with VMware, with no access to the internet, so I've downloaded and made local repositories, including this one.
Then I installed docker-engine.x86_64, and when starting the Docker daemon I get the following errors:
[root]# dockerd
DEBU[0000] docker group found. gid: 993
...
...
DEBU[0001] Error retrieving the next available loopback: open /dev/loop-control: no such device
ERRO[0001] There are no more loopback devices available.
ERRO[0001] [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed
DEBU[0001] Cleaning up old mountid : start.
FATA[0001] Error starting daemon: error initializing graphdriver: loopback attach failed
After manually adding the loop module, which controls loop devices, with this command:
insmod /lib/modules/3.10.0-327.36.2.el7.x86_64/kernel/drivers/block/loop.ko
The error changes to :
[graphdriver] prior storage driver "devicemapper" failed: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed
I've read that it could be because I don't have enough disk space, but I don't think that's it. Any ideas?
[root]# df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 51887356 2436256 49451100 5% /
I got the "There are no more loopback devices available" error, which stopped dockerd from running.
I fixed it by ensuring the storage driver was 'overlay':
# /usr/bin/dockerd -D --storage-driver=overlay
This was on Debian Jessie and docker running as a systemd service/unit.
To make it permanent, I created a systemd drop-in:
$ cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay
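After dropping that file in place, systemd needs to reload the unit and the daemon needs a restart; roughly:
sudo systemctl daemon-reload
systemctl show --property=ExecStart docker    # should now include --storage-driver=overlay
sudo systemctl restart docker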

Unable to start Docker Service in Ubuntu 16.04

I've been trying to use Docker (1.10) on Ubuntu 16.04, but the installation fails because the Docker service doesn't start.
I've already tried to install Docker via the docker.io and docker-engine apt packages, and via curl -sSL https://get.docker.com/ | sh, but it doesn't work.
My Host info is:
Linux Xenial 4.5.3-040503-generic #201605041831 SMP Wed May 4 22:33:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Here is systemctl status docker.service:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since sáb 2016-05-14 15:17:31 CEST; 12min ago
Docs: https://docs.docker.com
Process: 22479 ExecStart=/usr/bin/docker daemon -H fd:// (code=exited, status=1/FAILURE)
Main PID: 22479 (code=exited, status=1/FAILURE)
may 14 15:17:30 Xenial docker[22479]: time="2016-05-14T15:17:30.103601523+02:00" level=info msg="New containerd process, pid: 22485\n"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149064723+02:00" level=error msg="devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149127439+02:00" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153010028+02:00" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153130839+02:00" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31+02:00" level=info msg="stopping containerd after receiving terminated"
may 14 15:17:31 Xenial systemd[1]: Failed to start Docker Application Container Engine.
may 14 15:17:31 Xenial systemd[1]: docker.service: Unit entered failed state.
may 14 15:17:31 Xenial systemd[1]: docker.service: Failed with result 'exit-code'.
Here is the output of sudo docker daemon -D:
DEBU[0000] docker group found. gid: 999
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock)
INFO[0000] previous instance of containerd still alive (23050)
DEBU[0000] containerd connection state change: CONNECTING
DEBU[0000] Using default logging driver json-file
DEBU[0000] Golang's threads limit set to 55980
DEBU[0000] received past containerd event: &types.Event{Type:"live", Id:"", Status:0x0, Pid:"", Timestamp:0x57372cae}
DEBU[0000] containerd connection state change: READY
DEBU[0000] devicemapper: driver version is 4.34.0
DEBU[0000] devmapper: Generated prefix: docker-8:6-2101297
DEBU[0000] devmapper: Checking for existence of the pool docker-8:6-2101297-pool
DEBU[0000] devmapper: poolDataMajMin=7:0 poolMetaMajMin=7:1
DEBU[0000] devmapper: Major:Minor for device: /dev/loop0 is:7:0
DEBU[0000] devmapper: Major:Minor for device: /dev/loop1 is:7:1
DEBU[0000] devmapper: loadDeviceFilesOnStart()
DEBU[0000] devmapper: Skipping file /var/lib/docker/devicemapper/metadata/transaction-metadata
DEBU[0000] devmapper: loadDeviceFilesOnStart() END
DEBU[0000] devmapper: constructDeviceIDMap()
DEBU[0000] devmapper: constructDeviceIDMap() END
DEBU[0000] devmapper: Rolling back open transaction: TransactionID=1 hash= device_id=1
ERRO[0000] devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
DEBU[0000] devmapper: Initializing base device-mapper thin volume
DEBU[0000] devicemapper: CreateDevice(poolName=/dev/mapper/docker-8:6-2101297-pool, deviceID=1)
DEBU[0000] devmapper: Error creating device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] devmapper: Error device setupBaseImage: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
ERRO[0000] [graphdriver] prior storage driver "devicemapper" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] Cleaning up old mountid : start.
FATA[0000] Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
Here is ./check-config.sh output:
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
info: reading kernel config from /boot/config-4.5.3-040503-generic ...
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_KMEM: missing
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: missing
(note that cgroup swap accounting is not enabled in your kernel config, you can enable it by setting boot option "swapaccount=1")
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_EXT3_FS: missing
- CONFIG_EXT3_FS_XATTR: missing
- CONFIG_EXT3_FS_POSIX_ACL: missing
- CONFIG_EXT3_FS_SECURITY: missing
(enable these ext3 configs if you are using ext3 as backing filesystem)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: missing
- "btrfs":
- CONFIG_BTRFS_FS: enabled (as module)
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
- "overlay":
- CONFIG_OVERLAY_FS: enabled (as module)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
If someone could please help me, I would be very thankful.
Update
It seems that in newer versions of docker and Ubuntu the unit file for docker is simply masked (pointing to /dev/null).
You can verify it by running the following commands in the terminal:
sudo file /lib/systemd/system/docker.service
sudo file /lib/systemd/system/docker.socket
You should see that the unit file symlinks to /dev/null.
In this case, all you have to do is follow S34N's suggestion, and run:
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
sudo systemctl status docker
I'll also keep the original post, which addresses the error log saying that the storage driver should be replaced:
Original Post
I had the same problem, and I tried fixing it with Salva Cort's suggestion, but printing /etc/default/docker says:
# THIS FILE DOES NOT APPLY TO SYSTEMD
So here's a permanent fix that works for systemd (Ubuntu 15.04 and higher):
create a new file /etc/systemd/system/docker.service.d/overlay.conf with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -s overlay
flush changes by executing:
sudo systemctl daemon-reload
verify that the configuration has been loaded:
systemctl show --property=ExecStart docker
restart docker:
sudo systemctl restart docker
The following unmasking commands worked for me (Ubuntu 18). Hope it helps someone out there... :-)
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
I had the same problem after upgrading Docker from 17.05-ce to 17.06-ce via docker-machine.
Update /etc/systemd/system/docker.service.d/10-machine.conf
replace
`docker daemon` => `dockerd`
For example, from
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
to
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
flush changes by executing:
sudo systemctl daemon-reload
restart docker:
sudo systemctl restart docker
Well, finally I fixed it.
All you have to do is load a different storage driver; in my case I will use overlay:
Stop the Docker service: sudo systemctl stop docker.service
Start the Docker daemon with the overlay driver: sudo docker daemon -s overlay
Run a demo container: sudo docker run hello-world
In order to make these changes permanent, you must edit the /etc/default/docker file and add the option:
DOCKER_OPTS="-s overlay"
The next time the Docker service gets loaded, it will run docker daemon -s overlay.
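A sketch of making that edit from the shell (note that, as another answer above points out, /etc/default/docker is ignored on systemd-based releases, so this only helps where the init script actually sources it):
# append the option and restart (only effective where the init script reads /etc/default/docker)
echo 'DOCKER_OPTS="-s overlay"' | sudo tee -a /etc/default/docker
sudo service docker restart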
I've been able to get it working after a kernel upgrade by following the directions in this blog.
https://mymemorysucks.wordpress.com/2016/03/31/docker-graphdriver-and-aufs-failed-driver-not-supported-error-after-ubuntu-upgrade/
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo modprobe aufs
sudo service docker restart
After viewing some of the other answers, it looks like the issue was that the service wasn't running with the -s overlay option.
I also happened to notice that Docker tried to start up with ${DOCKER_OPTS} at the end of the call.
I was able to export DOCKER_OPTS="-s overlay" (because by default DOCKER_OPTS was empty) and get Docker running.
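For reference, a rough by-hand equivalent of that, since the init script simply appends ${DOCKER_OPTS} to the daemon invocation (exactly where it ends up depends on your distro's init script):
# the variable expands in the calling shell, so this becomes: sudo docker daemon -s overlay
export DOCKER_OPTS="-s overlay"
sudo docker daemon ${DOCKER_OPTS}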
I had a similar issue on a new Docker installation (version 19.03.3-rc1) on Ubuntu 18.04.3 LTS. By default, the /etc/docker/daemon.json file does not exist on a new installation. Following a tutorial, I changed the storage driver to devicemapper by creating a new daemon.json file. That worked, but then I deleted the daemon.json file, thinking it would revert to the default; it did not, and the service would not start.
Creating the /etc/docker/daemon.json file again with the default storage driver fixed it for me:
{
"storage-driver": "overlay2"
}
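After recreating the file, a restart plus a quick check that the daemon actually picked up overlay2 should confirm the fix (docker info reports the active storage driver):
sudo systemctl restart docker
docker info | grep -i "storage driver"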
sudo dockerd --debug will help you find the actual pain point. I fixed the same error using this on Ubuntu 20 LTS.
As for me, I got this error:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Finally I found that it was an error in /etc/docker/daemon.json, because I had added registry-mirrors:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
# I forgot to add a comma here!
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
After I added the comma and ran systemctl restart docker, the problem was solved.
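For reference, the corrected file just needs the comma between the two top-level keys, and (assuming python3 is installed) json.tool is a quick way to validate the file before restarting:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
python3 -m json.tool /etc/docker/daemon.json && sudo systemctl restart docker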
In my case, I was getting the following error from the journalctl -xe command:
unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character 'â' looking for beginning of object key string
Just clean /etc/docker/daemon.json with
{
}
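The 'â' is typically a stray non-ASCII byte (a smart quote or similar) pasted into the file; if your grep supports -P, something like this will point at the offending line before you resort to wiping the file:
grep -nP '[^\x00-\x7F]' /etc/docker/daemon.json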
I had this issue today after an upgrade to the Ubuntu kernel and tried numerous solutions above. However, the only one that worked (Ubuntu 16.04.6 LTS) was to remove (or rename) the folder /var/lib/docker.
Please be aware that this will remove all your Docker images, containers, volumes, etc. So understand the implications before applying it, or take a backup!
There are more details here:
https://github.com/docker/for-linux/issues/162
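If you go this route, a slightly safer variant is to stop the daemon first and rename the directory instead of deleting it, so you can roll back if needed; roughly:
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak    # keeps images/volumes around until you're sure the fresh state works
sudo systemctl start docker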

Vagrant refusing to start (Remote connection disconnect)

Vagrant refuses to start after I made some changes to networking; I was getting the following:
$ vagrant up
default: Warning: Connection timeout. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
I tried to fix this by restarting the service (which failed), which then resulted in this:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterface, interface IHostNetworkInterface
VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)" at line 66 of file VBoxManageHostonly.cpp
Others recommended restarting the VirtualBox service, but this also failed:
✗ sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
Unloading VBoxDrv.kext
(kernel) Can't remove kext org.virtualbox.kext.VBoxDrv; services failed to terminate - 0xe00002c7.
Failed to unload org.virtualbox.kext.VBoxDrv - (iokit/common) unsupported function.
Error: Failed to unload VBoxDrv.kext
Fatal error: VirtualBox
After much digging, it appears the restart command was failing due to VirtualBox processes holding locks.
This was fixed by doing the following:
# kill all virtualbox related processes
$ ps aux | grep vbox -i | awk -F ' ' '{print $2}' | xargs kill -9
# restart virtualbox service
$ sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
# try again
$ vagrant up
This worked for me.
Make sure you have enabled Adapter 2 in VirtualBox.
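If you prefer doing that from the command line rather than the VirtualBox GUI, something like the following (with the VM name default and vboxnet0 taken from the logs above, and the VM powered off) attaches the second adapter to the host-only network:
VBoxManage modifyvm default --nic2 hostonly --hostonlyadapter2 vboxnet0 --cableconnected2 on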
