Why is Docker not working in Manjaro, connection refused - linux

Every action throws an error.
For example:
[greenjoy@greenjoyPC ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:50383->[::1]:53: read: connection refused.
Some information:
[greenjoy@greenjoyPC ~]$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
compose: Docker Compose (Docker Inc., v2.13.0)
dev: Docker Dev Environments (Docker Inc., v0.0.5)
extension: Manages Docker extensions (Docker Inc., v0.2.16)
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
scan: Docker Scan (Docker Inc., v0.22.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version:
init version: de40ad0
Security Options:
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.81-1-MANJARO
Operating System: Ubuntu Core 18
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 11.58GiB
Name: greenjoyPC
ID: 7JRW:4CYG:5CUT:PC2B:HOVA:7OPT:I6PR:3AD5:DYD7:2FOK:MVMU:ZCYH
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
And Docker Desktop won't load no matter how long I wait; it just shows "Docker Desktop starting...".
I've tried several suggestions from different sources, but nothing works.

I am using Manjaro, and after starting the daemon and putting my user into the docker group, everything works as expected (using your particular docker run command).
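For reference, that setup is roughly the following (a minimal sketch; the service and group names are the standard ones, adjust if your install differs):
sudo systemctl enable --now docker.service
sudo usermod -aG docker $USER
# log out and back in (or run newgrp docker) so the group change takes effect
docker run hello-world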
However, from your error message, it looks more like a connectivity problem reaching the Docker registry servers. You may check your firewall to see whether connections are blocked.
There is a similar case here, where setting a different DNS server seemed to work: why do i get this error when pulling an image in docker
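As a rough sketch of that DNS angle: your error shows the daemon asking the local stub resolver on [::1]:53 and being refused, so checking host-level name resolution and temporarily pointing the host at a public DNS server is a quick test. How DNS is managed differs between NetworkManager, systemd-resolved, and a plain /etc/resolv.conf, so treat the last command as illustrative only:
# does the host resolve the registry at all?
nslookup registry-1.docker.io
# quick test only: point the host at a public resolver
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
# then retry
docker run hello-world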

Related

docker pull/login always using http instead of https

I installed Docker in Ubuntu 22.04, which is installed on VMware, and I set the Docker daemon.json like this:
{
  "registry-mirrors": [
    "https://my-domain.com"
  ],
  "insecure-registries": [
  ]
}
Here https://my-domain.com is my private registry, and it is installed on another machine.
But when I use docker pull or docker login against my private registry, Docker always uses HTTP instead of HTTPS:
root@root:~# docker pull my-domain.com/example/hello-world
Error response from daemon: Get "http://my-domain.com/v2/": dial tcp [::1]:80: connect: connection refused
root@root:~# docker login my-domain.com
Username: ******
Password: ******
Error response from daemon: Get "http://my-domain.com/v2/": dial tcp [::1]:80: connect: connection refused
Why does my Docker always use HTTP instead of HTTPS? I haven't set any configuration that should cause this.
This is my docker info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
compose: Docker Compose (Docker Inc., v2.14.1)
scan: Docker Scan (Docker Inc., v0.23.0)
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 2
Server Version: 20.10.22
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9ba4b250366a5ddde94bb7c9d1def331423aa323
runc version: v1.1.4-0-g5fd4c4d
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.0-57-generic
Operating System: Ubuntu 22.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.896GiB
Name: suyj
ID: HB2H:FWFC:GOZT:K7HR:EFLZ:Z6TM:MJCC:MS3W:EO44:NZ4G:W3WZ:TWGJ
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://my-domain.com/
Live Restore Enabled: false
I now know what happened with my Docker.
I had changed the configuration (/etc/netplan/00-installer-config.yaml) to give the virtual machine a fixed IP address.
network:
  ethernets:
    ens33:
      dhcp4: false
      dhcp6: true
      optional: true
      addresses:
        - 192.168.188.137/24
      routes:
        - to: default
          via: 192.168.188.2
      nameservers:
        addresses:
          - 114.114.114.114
          - 8.8.8.8
        search:
          - localhost
          - local
  version: 2
  renderer: NetworkManager
But my domain is an intranet domain, not a public one. When I ping the private registry domain, it does not return the right IP, because the lookup goes to the public DNS servers. (That presumably also explains the plain-HTTP request: the name resolved to a loopback address, which Docker treats as an insecure registry per the default 127.0.0.0/8 entry, so it fell back to HTTP.)
root@root:~# ping my-domain.com
PING docker-iottest.midea.com(ip6-localhost (::1)) 56 data bytes
64 bytes from ip6-localhost (::1): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from ip6-localhost (::1): icmp_seq=2 ttl=64 time=0.034 ms
I had to set the VMware network gateway as a DNS address so the name resolves correctly:
network:
  ethernets:
    ens33:
      ......
      nameservers:
        addresses:
          - 192.168.188.2 # add the vmware network gateway IP
          - 114.114.114.114
          - 8.8.8.8
      ......
After reloading the network configuration or rebooting the system, it works and returns the correct IP.
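For completeness, applying the netplan change looks roughly like this (a sketch, assuming the file path from the question):
sudo netplan apply            # or reboot
ping my-domain.com            # should now resolve to the intranet IP
docker login my-domain.com    # should now go over HTTPS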

docker insecure registry, http: server gave HTTP response to HTTPS client

I'm trying to push my local Docker images to a Docker registry that runs on another machine on my local network. It works fine when I push from the registry host machine, but I'm unable to push from my own computer. I've added the insecure-registries parameter to the /etc/docker/daemon.json file on the host machine properly, but still nothing.
docker info on docker registry host machine
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
compose: Docker Compose (Docker Inc., v2.6.0)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 6
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
runc version: v1.1.3-0-g6724737
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-99-generic
Operating System: Ubuntu 20.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.935GiB
Name: mongo-dav-ubuntu
ID: KAQO:FLF5:CNCJ:M6GN:W6ML:LBGW:YJ5S:IPM4:FJLF:FH5G:BIXU:HBUR
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
{host-machine-local-domain}:9000
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
The error I'm getting on my computer:
The push refers to repository [{host-machine-local-domain-name}:9000/alpine]
Get "https://{host-machine-local-domain-name}:9000/v2/": http: server gave HTTP response to HTTPS client
docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a55c18e4141 registry:2 "/entrypoint.sh /etc…" About an hour ago Up 25 seconds 5000/tcp, 0.0.0.0:9000->80/tcp, :::9000->80/tcp vibrant_shockley
note that I've set HTTP_SERVER_ADDR to 80
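For reference, the insecure-registries entry discussed above looks roughly like this in /etc/docker/daemon.json (a sketch; the host name and port are the placeholders from the question). Note that Docker checks this setting on the machine that performs the push or pull, so it has to be present there too:
{
  "insecure-registries": ["{host-machine-local-domain-name}:9000"]
}
and the daemon must be restarted for the change to take effect:
sudo systemctl restart docker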

Failed to get containers "/" in cAdvisor Docker

I ran the cAdvisor Docker container with this command:
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 --detach=true \
--name=cadvisor \
gcr.io/google-containers/cadvisor:v0.36.0
It runs without any error:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0aa3cc8d5f2 gcr.io/google-containers/cadvisor:v0.36.0 "/usr/bin/cadvisor -…" 48 seconds ago Up 48 seconds (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp cadvisor
When I go to localhost:8080, it gives me this error message:
failed to get container "/" with error: unable to find data in memory cache
I tried with sudo docker run; still the same error.
How can I fix this? Is this Docker-related or cAdvisor-related?
Here is my system info:
OS : Ubuntu 22.04
Docker Info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
compose: Docker Compose (Docker Inc., v2.6.0)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 3
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: true
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version: v1.1.2-0-ga916309
init version: de40ad0
Security Options:
seccomp
Profile: default
rootless
cgroupns
Kernel Version: 5.15.0-40-generic
Operating System: Ubuntu 22.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 13.51GiB
Name: ubuntu-yan
ID: EHQF:VYEN:4YZV:GQ6C:THPI:5J3F:A5JS:OLR7:H4QN:Q5Q5:EATM:2RXR
Docker Root Dir: /home/yanpaing/.local/share/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
I had this issue because old cAdvisor versions don't support "recent" cgroup versions (cgroup v2).
I upgraded cAdvisor to fix the issue.
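As a sketch, the same docker run with a newer image; recent cAdvisor releases are published under gcr.io/cadvisor/cadvisor (check the project's releases page for the current tag), and the upstream README additionally suggests --privileged and --device=/dev/kmsg on recent kernels:
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 --detach=true \
--privileged --device=/dev/kmsg \
--name=cadvisor \
gcr.io/cadvisor/cadvisor:v0.47.0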
Troubleshooting tips
Use docker logs to retrieve cadvisor logs. Mine showed errors like:
W0107 09:20:00.064502 1 container.go:526] Failed to update stats for container "/system.slice/cups.service": \
failed to parse memory.usage_in_bytes - open /sys/fs/cgroup/system.slice/cups.service/memory.usage_in_bytes: \
no such file or directory, continuing to push stats

Docker pull fails during extraction with "lchown /usr/bin/sbd no such file or directory"

I've been battling trying to get Docker installed on RHEL7 and, now that I've been able to get it installed, I'm stuck just trying to do a simple docker pull.
I was able to finally get Docker installed using my proposed solution here Issues installing Docker on RHEL 7 Linux Server, but now during the extraction process, I get the following error:
latest: Pulling from [my-repo]
8657e219e309: Pull complete
a8db9e62fad8: Extracting [==================================================>] 3.507 GB/3.507 GB
failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr/bin/sbd: no such file or directory
Unable to find image '[my-docker-repo]:latest' locally
latest: Pulling from [my-repo]
8657e219e309: Pull complete
a8db9e62fad8: Extracting [==================================================>] 3.507 GB/3.507 GB
docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr/bin/sbd: no such file or directory.
I'm not sure if this is related to the way I installed docker or if it's actually something else. I only installed docker using the following two commands:
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.0.ce-1.el7.centos.noarch.rpm
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.0.ce-1.el7.centos.x86_64.rpm
I can run Docker just fine and start the service, so I'm not sure it's the installation that's the issue per se.
The only two issues I've found on the Internet that seems somewhat related to mine are these:
https://github.com/moby/moby/issues/41803
https://github.com/moby/moby/issues/41821
However, neither of these issues has a solution beyond merged pull requests that apparently still don't fix my case.
I've also visited https://docs.docker.com/engine/security/rootless/#prerequisites and verified that my /etc/subuid and /etc/subgid entries show the value 65,535.
Still no luck.
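For what it's worth, a quick way to double-check those entries (a sketch; the start of the range and the exact count vary between systems):
grep "$USER" /etc/subuid /etc/subgid
# expect lines roughly like:
# /etc/subuid:youruser:100000:65536
# /etc/subgid:youruser:100000:65536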
Here's the output of my docker info command:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.5
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1160.21.1.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.9 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.349GiB
Name: d8de679d27f2453
ID: L43V:XEXI:6B6D:A3K4:KCI5:VQB7:MOG4:7TO5:QATR:5PM5:QT2Q:TTN5
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Can't restart docker container: OCI runtime create failed: container with id exists

I'm new to Docker, and I've tried to find a solution on Google before asking this question - no result.
I decided to learn Docker via a practical use case - creating a PostgreSQL container on my VM instance for a development environment.
I was on vacation and didn't check my server for several days. Later I tried to connect to my DB and couldn't - all of my active containers had exited with code 128.
I tried to start the DB container again - docker start django-postgres - and got this error message: Error response from daemon: OCI runtime create failed: container with id exists: 5c11e724bf52dd1cb6fd10ebda40710385e412981eb269c30071ecc8aac9e805: unknown
Error: failed to start containers: django-postgres
I suspect that somewhere in my system Docker keeps some metadata of my container which wasn't removed after the container went down with code 128, but my knowledge of Unix isn't enough to determine where it could be. Also, I'm afraid of losing the DB data connected with the container.
Some technical info:
docker version:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
docker info
Containers: 9
Running: 2
Paused: 0
Stopped: 7
Images: 5
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-116-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 488.3MiB
ID: NDUH:OH24:4M4L:TR5O:TOIH:ARV4:LNRP:6QNE:WEYW:TMXR:7KNK:ZPDD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Can anyone help me understand this issue and how to fix it without losing data?
N.B. The second container that exited with code 128 was OpenVPN. I can't restart it either, but the error was different - cgroups: cannot found cgroup mount destination: unknown
I found a solution here (GitHub):
The temp fix is:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
This fix didn't help with the Postgres container.
It is possible to list all running and stopped containers using docker ps -a (-a or --all shows all containers; the default shows just running ones).
You can find the volumes attached to your old Postgres container using docker inspect <container-id> (maybe pipe it to less and search for the volumes).
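For example, a compact way to print just the mounts of the old container (a sketch; the container name assumes the one from the question):
docker inspect -f '{{ json .Mounts }}' django-postgres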
If you want to recover your data, you can attach the volume to a new Postgres container and recover it from there. (If it is a root volume, change target to /.)
docker run --name new-postgres \
--mount source=myoldvol,target=/var/lib/postgresql/data -d postgres
And then you can remove the old one by using docker rm <container-id>.
For more information, please see:
docker ps,
docker volumes,
docker rm
