Initially, minikube used to run fine on this same machine.
Now, when I run the minikube start command, it does not start; I have tried everything but the result is still the same.
Here are the logs from minikube start:
[Company\sainath.reddy#hostm repos]$ minikube start
minikube v1.26.0 on Amazon 2 (xen/amd64)
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Updating the running docker "minikube" container ...
This container is having trouble accessing https://k8s.gcr.io
To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
  - kubelet.cgroup-driver=systemd
Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
  - Generating certificates and keys ...
  - Booting up control plane ...
initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.14.285-215.501.amzn2.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0723 18:18:12.089975 49249 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.14.285-215.501.amzn2.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
  - Generating certificates and keys ...
  - Booting up control plane ...
Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.14.285-215.501.amzn2.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0723 18:22:16.760474 50480 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.14.285-215.501.amzn2.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
If the above advice does not help, please let us know:
https://github.com/kubernetes/minikube/issues/new/choose

Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.

Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
[... stdout and stderr identical to the kubeadm init output shown above ...]
Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
Related issue: https://github.com/kubernetes/minikube/issues/4172
[Company\sainath.reddy#host repos]$
I have tried minikube start --extra-config=kubelet.cgroup-driver=systemd, but the result is still the same.
I am running minikube on a Linux machine running Amazon Linux 2.
Here are the logs when I check the kubelet service:
[Company\sainath.reddy#host ~]$ systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Mon 2022-07-25 12:28:51 IST; 2s ago
Docs: https://kubernetes.io/docs/
Process: 744434 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 744434 (code=exited, status=1/FAILURE)
[Company\sainath.reddy#host ~]$
The error message is (you can search for the error keyword K8S_KUBELET_NOT_RUNNING):
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
The kubelet has to be up and running. Check it with:
systemctl status kubelet
journalctl -xeu kubelet
Steps:
Is systemd installed on your machine?
Check it with:
rpm -qa | grep -i systemd
Otherwise, install it with:
yum install -y /usr/bin/systemctl; systemctl --version
Try updating the cgroup driver in the /etc/docker/daemon.json file (create it if it is not there!):
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart the services: (use sudo where needed)
systemctl daemon-reload
systemctl restart docker.service
systemctl restart kubelet
If the kubelet is running after this, the issue is resolved.
If you then face issues with swap, you can turn swap off with the instructions here.
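For the minikube docker-driver case specifically, here is a minimal recovery sketch after fixing the host's cgroup driver (it assumes your minikube version supports the --force-systemd flag, and it recreates the cluster, so existing cluster state is lost):
# restart Docker so it picks up the daemon.json change, then rebuild the cluster
sudo systemctl restart docker
minikube delete                  # discard the broken cluster state
minikube start --force-systemd   # have minikube configure systemd as the cgroup manager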
Version details
OS: Ubuntu 18.04.5 LTS
aziot-edge: bionic,now 1.2.3-1 amd64
aziot-identity-service: bionic,now 1.2.2-1 amd64
docker: Docker version 20.10.8+azure, build 3967b7d28e15a020e4ee344283128ead633b3e0c
Verifying the installation shows that aziot-identityd is in the "Down - activating" state:
# sudo iotedge system status
System services:
aziot-edged Running
aziot-identityd Down - activating
aziot-keyd Running
aziot-certd Running
aziot-tpmd Ready
aziot-identityd is in a bad state because:
aziot-identityd.service: Down - activating : Printing the last 10 log lines.
-- Logs begin at Fri 2020-11-06 12:29:56 IST, end at Fri 2021-09-10 19:07:13 IST. --
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [INFO] - Could not reconcile Identities with current device data. Reprovisioning.
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [INFO] - Updated device info for Edge1.
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - Failed to provision with IoT Hub, and no valid device backup was found: Hub client error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - service encountered an error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - caused by: Hub client error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - caused by: internal error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - 0: <unknown>
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 1: <unknown>
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN systemd[1]: aziot-identityd.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN systemd[1]: aziot-identityd.service: Failed with result 'exit-code'.
iotedge check shows 2 configuration-related errors:
# iotedge check --verbose
Configuration checks (aziot-identity-service)
---------------------------------------------
keyd configuration is well-formed - OK
certd configuration is well-formed - OK
tpmd configuration is well-formed - OK
identityd configuration is well-formed - OK
daemon configurations up-to-date with config.toml - OK
identityd config toml file specifies a valid hostname - OK
aziot-identity-service package is up-to-date - OK
host time is close to reference time - OK
preloaded certificates are valid - OK
keyd is running - OK
certd is running - OK
identityd is running - OK
read all preloaded certificates from the Certificates Service - Error
could not load cert with ID "aziot-edged-trust-bundle"
Caused by:
parameter "id" has an invalid value
caused by: not found
read all preloaded key pairs from the Keys Service - OK
ensure all preloaded certificates match preloaded private keys with the same ID - OK
Connectivity checks (aziot-identity-service)
--------------------------------------------
host can connect to and perform TLS handshake with iothub AMQP port - OK
host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
host can connect to and perform TLS handshake with iothub MQTT port - OK
Configuration checks
--------------------
aziot-edged configuration is well-formed - OK
configuration up-to-date with config.toml - OK
container engine is installed and functional - OK
configuration has correct URIs for daemon mgmt endpoint - Error
SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
caused by: docker returned exit code: 1, stderr = SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
aziot-edge package is up-to-date - OK
container time is close to host time - OK
DNS server - Warning
Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
You can ignore this warning if you are setting DNS server per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
production readiness: container engine - OK
production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
production readiness: Edge Agent's storage directory is persisted on the host filesystem - Error
Could not check current state of edgeAgent container
caused by: docker returned exit code: 1, stderr = Error: No such object: edgeAgent
production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error
Could not check current state of edgeHub container
caused by: docker returned exit code: 1, stderr = Error: No such object: edgeHub
Agent image is valid and can be pulled from upstream - OK
Connectivity checks
-------------------
container on the default network can connect to upstream AMQP port - OK
container on the default network can connect to upstream HTTPS / WebSockets port - OK
container on the default network can connect to upstream MQTT port - OK
container on the IoT Edge module network can connect to upstream AMQP port - OK
container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
container on the IoT Edge module network can connect to upstream MQTT port - OK
30 check(s) succeeded.
2 check(s) raised warnings.
4 check(s) raised errors.
The config TOML file has only manual provisioning with a connection string.
I had this error because my IoT Hub's "Public network access" was set to "Disabled".
You can correct this by doing the following:
Go to the Azure portal, and open the IoT Hub resource in question.
Go to the Networking menu option.
Change "Public network access" to either "All networks" or "Selected IP ranges", depending on your use case. Remember that if you select "Selected IP ranges", you must add the VM/IoT device's IP address to the list of allowed IP addresses. A CLI equivalent is sketched below.
I came across this question far too many times while working in an enterprise environment. My findings are more related to the environment and the security aspects of the whole system.
In my case, the working environment was Red Hat Linux, and Azure was hosted on-prem with an added layer of proxy server. One general piece of advice to solve the most common issues in such an environment is to grant all the necessary rwx (read, write, execute) permissions.
Pinpointing the problem asked here: the identity daemon is failing because the aziot trust bundle is not loading properly.
read all preloaded certificates from the Certificates Service - Error
could not load cert with ID "aziot-edged-trust-bundle"
Check that the certificate is properly set up to use the device identity certificate; the trust bundle is declared in config.toml, as sketched below.
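A hypothetical excerpt of /etc/aziot/config.toml showing where the trust bundle is declared (the certificate path is a placeholder; check the exact setting name against your IoT Edge version):
# /etc/aziot/config.toml (excerpt)
trust_bundle_cert = "file:///etc/aziot/trust-bundle.pem"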
The second error is related to the daemon management socket:
ร configuration has correct URIs for daemon mgmt endpoint - Error
SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
caused by: docker returned exit code: 1, stderr = SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
This can be resolved by manually giving ownership permission to mgmt.sock at the /var/lib/iotedge location, as sketched below.
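A minimal sketch of that permission fix; the socket path and the service user/group are assumptions based on the answer above (newer releases place the socket under /var/run/iotedge instead):
# fix ownership/permissions of the management socket, then restart the daemon
sudo chown iotedge:iotedge /var/lib/iotedge/mgmt.sock
sudo chmod 660 /var/lib/iotedge/mgmt.sock
sudo systemctl restart iotedge    # or aziot-edged on IoT Edge 1.2+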
Nevertheless, there may be a variety of reasons for iotedge DPS provisioning to not work and, in turn, for iotAgent and iotHub to not start. It is better to get to the root of the issue and resolve that.
I just installed IoT Edge on a Raspberry Pi (Stretch) following this: Azure/InternetofThings/IoTEdge/Install or uninstall the Azure IoT Edge runtime
However, I get the errors below.
Three weeks ago I installed another one and it worked perfectly with the same instructions.
pi#raspberrypi:/etc/iotedge $ sudo iotedge check --verbose
Configuration checks
--------------------
config.yaml is well-formed - OK
config.yaml has well-formed connection string - OK
container engine is installed and functional - OK
config.yaml has correct hostname - OK
config.yaml has correct URIs for daemon mgmt endpoint - Error
One or more errors occurred. (Permission denied /var/run/iotedge/mgmt.sock)
caused by: docker returned exit code: 1, stderr = One or more errors occurred. (Permission denied /var/run/iotedge/mgmt.sock)
latest security daemon - OK
host time is close to real time - OK
container time is close to host time - OK
DNS server - Warning
Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
You can ignore this warning if you are setting DNS server per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
production readiness: certificates - Warning
The Edge device is using self-signed automatically-generated development certificates.
They will expire in 89 days (at 2021-02-22 07:24:52 UTC) causing module-to-module and downstream device communication to fail on an active deployment.
After the certs have expired, restarting the IoT Edge daemon will trigger it to generate new development certs.
Please consider using production certificates instead. See https://aka.ms/iotedge-prod-checklist-certs for best practices.
production readiness: container engine - OK
production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error
Could not check current state of edgeHub container
caused by: docker returned exit code: 1, stderr = Error: No such object: edgeHub
Connectivity checks
-------------------
host can connect to and perform TLS handshake with IoT Hub AMQP port - OK
host can connect to and perform TLS handshake with IoT Hub HTTPS / WebSockets port - OK
host can connect to and perform TLS handshake with IoT Hub MQTT port - OK
container on the default network can connect to IoT Hub AMQP port - OK
container on the default network can connect to IoT Hub HTTPS / WebSockets port - OK
container on the default network can connect to IoT Hub MQTT port - OK
container on the IoT Edge module network can connect to IoT Hub AMQP port - OK
container on the IoT Edge module network can connect to IoT Hub HTTPS / WebSockets port - OK
container on the IoT Edge module network can connect to IoT Hub MQTT port - OK
17 check(s) succeeded.
4 check(s) raised warnings.
2 check(s) raised errors.
iotedge list
pi#raspberrypi:~ $ sudo iotedge list
A module runtime error occurred
caused by: Could not list modules
caused by: connection error: Connection reset by peer (os error 104)
I'm currently on a self-hosted GitLab 11.9 instance. I have the ability to add a Kubernetes cluster to projects at the individual level, but not at the group level (that was introduced in 11.10).
I created a Kubernetes cluster on AWS EKS and successfully connected it to GitLab's Auto DevOps for a specific project. I was able to successfully install Helm Tiller, Prometheus, and GitLab Runner. Auto DevOps was working fine for that project.
Before I discovered that running a cluster at the group level was introduced in GitLab 11.10, I disconnected the Kubernetes cluster from the first project and connected it at the group level. I successfully installed Helm Tiller but failed to install Ingress or Cert-Manager. After I discovered that my version doesn't contain group-level Auto DevOps functionality, I connected the cluster to another, different application and attempted to install Prometheus and GitLab Runner. However, the operation failed.
My pods are as follows:
% kubectl get pods --namespace=gitlab-managed-apps
NAME READY STATUS RESTARTS AGE
install-prometheus 0/1 Error 0 18h
install-runner 0/1 Error 0 18h
prometheus-kube-state-metrics-8668948654-8p4d5 1/1 Running 0 18h
prometheus-prometheus-server-746bb67956-789ln 2/2 Running 0 18h
runner-gitlab-runner-548ddfd4f4-k5r8s 1/1 Running 0 18h
tiller-deploy-6586b57bcb-p8kdm 1/1 Running 0 18h
Here's some output from my log file:
% kubectl logs install-prometheus --namespace=gitlab-managed-apps --container=helm
+ helm init --upgrade
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
+ seq 1 30
+ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
+ sleep 1s
Retrying (1)...
+ echo 'Retrying (1)...'
+ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
...
+ sleep 1s
+ echo 'Retrying (30)...'
+ helm upgrade prometheus stable/prometheus --install --reset-values --tls --tls-ca-cert /data/helm/prometheus/config/ca.pem --tls-cert /data/helm/prometheus/config/cert.pem --tls-key /data/helm/prometheus/config/key.pem --version 6.7.3 --set 'rbac.create=false,rbac.enabled=false' --namespace gitlab-managed-apps -f /data/helm/prometheus/config/values.yaml
Retrying (30)...
Error: UPGRADE FAILED: remote error: tls: bad certificate
This cluster doesn't contain anything else except the services, pods, and deployments created specifically for Auto DevOps. How should I go about 'resetting' the cluster or uninstalling these services?
I have a question about Kubernetes and network firewall rules. I want to secure my Kubernetes cluster with firewall rules, and I was wondering whether workers/masters need internet access. I'm planning on using a private registry located on my network, but I'm having problems getting it to work when the workers don't have internet access. Here's an example:
Name: foo
Namespace: default
Node: worker003/192.168.30.1
Start Time: Mon, 23 Jan 2017 10:33:07 -0500
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
foo:
Container ID:
Image: registry.company.org/wop_java/app:nginx
Image ID:
Port:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-3cg0w (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-3cg0w:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-3cg0w
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned foo to worker003
4m 1m 4 {kubelet worker003} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request. details: (Error response from daemon: {\"message\":\"Get https://gcr.io/v1/_ping: dial tcp 74.125.192.82:443: i/o timeout\"})"
3m 3s 9 {kubelet worker003} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
My question is, does kubernetes require internet access to work? If yes, where is it documented officially?
You need to pass the --pod-infra-container-image argument to the kubelet, as documented here: https://kubernetes.io/docs/admin/kubelet/.
It defaults to gcr.io/google_containers/pause-amd64:3.0, which is unsuccessfully pulled on your machine since gcr.io is unavailable.
You can easily transfer the pause image to your private registry:
docker pull gcr.io/google_containers/pause-amd64:3.0
docker tag gcr.io/google_containers/pause-amd64:3.0 REGISTRY.PRIVATE/google_containers/pause-amd64:3.0
docker push REGISTRY.PRIVATE/google_containers/pause-amd64:3.0
# and pass
kubelet --pod-infra-container-image=REGISTRY.PRIVATE/google_containers/pause-amd64:3.0 ...
The pause container is created prior to your containers in order to allocate and keep the network and IPC namespaces across restarts.
Kubernetes does not need any internet access for normal operation when all required containers and components are provided by the private repository. A good starting point is the Bare Metal offline provisioning guide.
They do not need internet access, but you are not getting access to the private registry you are designating. Have you looked at https://kubernetes.io/docs/user-guide/images/? It has a couple of good options on how to get access to a private registry. https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ also has some details on it. We specify imagePullSecrets and it works fine; see the sketch below.
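A minimal sketch of the imagePullSecrets approach; the registry URL and image are taken from the pod description above, while the secret name and credentials are placeholders:
# create a registry credential secret in the pod's namespace
kubectl create secret docker-registry private-registry-cred \
  --docker-server=registry.company.org \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>

# reference the secret from the pod spec
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: registry.company.org/wop_java/app:nginx
  imagePullSecrets:
  - name: private-registry-cred
EOF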
I've been trying to use Docker (1.10) on Ubuntu 16.04, but the installation fails because the Docker service doesn't start.
I've already tried installing Docker via the docker.io and docker-engine apt packages and via curl -sSL https://get.docker.com/ | sh, but it doesn't work.
My Host info is:
Linux Xenial 4.5.3-040503-generic #201605041831 SMP Wed May 4 22:33:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Here is systemctl status docker.service:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since sáb 2016-05-14 15:17:31 CEST; 12min ago
Docs: https://docs.docker.com
Process: 22479 ExecStart=/usr/bin/docker daemon -H fd:// (code=exited, status=1/FAILURE)
Main PID: 22479 (code=exited, status=1/FAILURE)
may 14 15:17:30 Xenial docker[22479]: time="2016-05-14T15:17:30.103601523+02:00" level=info msg="New containerd process, pid: 22485\n"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149064723+02:00" level=error msg="devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149127439+02:00" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153010028+02:00" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153130839+02:00" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31+02:00" level=info msg="stopping containerd after receiving terminated"
may 14 15:17:31 Xenial systemd[1]: Failed to start Docker Application Container Engine.
may 14 15:17:31 Xenial systemd[1]: docker.service: Unit entered failed state.
may 14 15:17:31 Xenial systemd[1]: docker.service: Failed with result 'exit-code'.
Here is the output of sudo docker daemon -D:
DEBU[0000] docker group found. gid: 999
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock)
INFO[0000] previous instance of containerd still alive (23050)
DEBU[0000] containerd connection state change: CONNECTING
DEBU[0000] Using default logging driver json-file
DEBU[0000] Golang's threads limit set to 55980
DEBU[0000] received past containerd event: &types.Event{Type:"live", Id:"", Status:0x0, Pid:"", Timestamp:0x57372cae}
DEBU[0000] containerd connection state change: READY
DEBU[0000] devicemapper: driver version is 4.34.0
DEBU[0000] devmapper: Generated prefix: docker-8:6-2101297
DEBU[0000] devmapper: Checking for existence of the pool docker-8:6-2101297-pool
DEBU[0000] devmapper: poolDataMajMin=7:0 poolMetaMajMin=7:1
DEBU[0000] devmapper: Major:Minor for device: /dev/loop0 is:7:0
DEBU[0000] devmapper: Major:Minor for device: /dev/loop1 is:7:1
DEBU[0000] devmapper: loadDeviceFilesOnStart()
DEBU[0000] devmapper: Skipping file /var/lib/docker/devicemapper/metadata/transaction-metadata
DEBU[0000] devmapper: loadDeviceFilesOnStart() END
DEBU[0000] devmapper: constructDeviceIDMap()
DEBU[0000] devmapper: constructDeviceIDMap() END
DEBU[0000] devmapper: Rolling back open transaction: TransactionID=1 hash= device_id=1
ERRO[0000] devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
DEBU[0000] devmapper: Initializing base device-mapper thin volume
DEBU[0000] devicemapper: CreateDevice(poolName=/dev/mapper/docker-8:6-2101297-pool, deviceID=1)
DEBU[0000] devmapper: Error creating device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] devmapper: Error device setupBaseImage: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
ERRO[0000] [graphdriver] prior storage driver "devicemapper" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] Cleaning up old mountid : start.
FATA[0000] Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
Here is ./check-config.sh output:
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
info: reading kernel config from /boot/config-4.5.3-040503-generic ...
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_KMEM: missing
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: missing
(note that cgroup swap accounting is not enabled in your kernel config, you can enable it by setting boot option "swapaccount=1")
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_EXT3_FS: missing
- CONFIG_EXT3_FS_XATTR: missing
- CONFIG_EXT3_FS_POSIX_ACL: missing
- CONFIG_EXT3_FS_SECURITY: missing
(enable these ext3 configs if you are using ext3 as backing filesystem)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: missing
- "btrfs":
- CONFIG_BTRFS_FS: enabled (as module)
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
- "overlay":
- CONFIG_OVERLAY_FS: enabled (as module)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
If someone could please help me I would be very thankful
Update
It seems that in newer versions of docker and Ubuntu the unit file for docker is simply masked (pointing to /dev/null).
You can verify it by running the following commands in the terminal:
sudo file /lib/systemd/system/docker.service
sudo file /lib/systemd/system/docker.socket
You should see that the unit file symlinks to /dev/null.
In this case, all you have to do is follow S34N's suggestion, and run:
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
sudo systemctl status docker
I'll also keep the original post, which addresses the error log stating that the storage driver should be replaced:
Original Post
I had the same problem, and I tried fixing it with Salva Cort's suggestion, but printing /etc/default/docker says:
# THIS FILE DOES NOT APPLY TO SYSTEMD
So here's a permanent fix that works for systemd (Ubuntu 15.04 and higher):
create a new file /etc/systemd/system/docker.service.d/overlay.conf with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -s overlay
flush changes by executing:
sudo systemctl daemon-reload
verify that the configuration has been loaded:
systemctl show --property=ExecStart docker
restart docker:
sudo systemctl restart docker
The following unmasking commands worked for me (Ubuntu 18). Hope it helps someone out there... :-)
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
I had the same problem after upgrading Docker from 17.05-ce to 17.06-ce via docker-machine.
Update /etc/systemd/system/docker.service.d/10-machine.conf:
replace
`docker daemon` => `dockerd`
For example, change from
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
to
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
flush changes by executing:
sudo systemctl daemon-reload
restart docker:
sudo systemctl restart docker
Well, finally I fixed it.
All you have to do is load a different storage driver; in my case I will use overlay:
Stop the Docker service: sudo systemctl stop docker.service
Start Docker Daemon (overlay driver): sudo docker daemon -s overlay
Run Demo container: sudo docker run hello-world
In order to make these changes permanent, you must edit the /etc/default/docker file and add the option:
DOCKER_OPTS="-s overlay"
The next time the Docker service is loaded, it will run docker daemon -s overlay.
I've been able to get it working after a kernel upgrade by following the directions in this blog.
https://mymemorysucks.wordpress.com/2016/03/31/docker-graphdriver-and-aufs-failed-driver-not-supported-error-after-ubuntu-upgrade/
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo modprobe aufs
sudo service docker restart
After viewing some of the other answers, it looks like the issue was that the service wasn't running with the -s overlay option.
I also happened to notice that docker tried to start up with ${DOCKER_OPTS} at the end of the call.
I was able to export DOCKER_OPTS="-s overlay" (because by default DOCKER_OPTS was empty) and get docker running.
I had a similar issue on a new Docker installation (version 19.03.3-rc1) on Ubuntu 18.04.3 LTS. By default, the /etc/docker/daemon.json file does not exist on a new installation. Following a tutorial, I changed the storage driver to devicemapper by creating a new daemon.json file. It worked, but then I deleted the daemon.json file thinking that it would revert to the default; that did not work and the service would not start.
Creating the /etc/docker/daemon.json file again with the default storage driver fixed it for me:
{
"storage-driver": "overlay2"
}
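After recreating the file, Docker has to pick it up again; a typical sequence on a systemd host (the same commands used elsewhere in this thread) is:
# restart Docker so it reads the recreated daemon.json, then confirm it is healthy
sudo systemctl restart docker
sudo systemctl status docker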
Running sudo dockerd --debug will help you find the actual pain point. I fixed the same error using this on Ubuntu 20.04 LTS.
As for me, I got this error:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Finally I found that it was an error in /etc/docker/daemon.json, introduced when I added registry-mirrors:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
# I forgot to add a comma here!
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
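For reference, here is the corrected file with the missing comma added (and the illustrative comment removed, since JSON does not allow comments):
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}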
After I added the comma and ran systemctl restart docker, the problem was solved.
In my case I was getting the following error from the journalctl -xe command:
unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character 'â' looking for beginning of object key string
Just clean up /etc/docker/daemon.json so that it contains only:
{
}
I had this issue today after an upgrade to the Ubuntu kernel and tried numerous solutions above. However, the only one that worked (Ubuntu 16.04.6 LTS) was to remove (or rename) the folder /var/lib/docker, as sketched below.
Please be aware that this will remove all your Docker images, containers, volumes, etc. So understand the implications before applying it, or take a backup!
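A cautious sketch of that step (rename rather than delete, so you can roll back if needed):
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak   # Docker recreates /var/lib/docker on the next start
sudo systemctl start docker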
There are more details here:
https://github.com/docker/for-linux/issues/162