I've been running my application using docker compose for a while now. One of the heaviest parts of the application is the background tasks.
I noticed that most of my background tasks (running with sidekiq) were running much slower than on one of my colleagues' computers (not using docker).
Using docker, the same background task runs in 40 seconds. On the native OS it runs in 12 seconds. I tried this myself by running it natively on my own machine, and I can confirm it's much faster outside docker.
Docker info:
Containers: 14
Running: 4
Paused: 0
Stopped: 10
Images: 42
Server Version: 17.12.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.60-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.786GiB
Name: linuxkit-025000000001
ID: CFFM:EFLI:4A5K:XTPG:E27S:KXJT:26SS:ZAPE:ZAFW:3BRM:E6YK:MVAA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 76
Goroutines: 129
System Time: 2018-02-09T14:13:44.910242335Z
EventsListeners: 3
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Docker compose:
version: '3.4'
services:
  sidekiq-1:
    build: .
    command: bundle exec sidekiq -c 4 -L log/sidekiq-1.log
    tty: true
    stdin_open: true
    volumes:
      - '.:/app'
    environment:
      - DATABASE_URL=postgres://username@postgres/database
      - REDIS_URL=redis://redis:6379
  sidekiq-2:
    build: .
    command: bundle exec sidekiq -c 4 -L log/sidekiq-2.log
    tty: true
    stdin_open: true
    volumes:
      - '.:/app'
    environment:
      - DATABASE_URL=postgres://username@postgres/database
      - REDIS_URL=redis://redis:6379
I'm a bit lost as to what might be happening.
One thing I noticed is that even though I have allocated 8 cores to docker, only 4 sidekiq threads run at the same time, and CPU usage reported by docker stats never goes above 80% for these 2 containers.
Any help appreciated.
Docker for Mac has known performance issues with filesystem-intensive workloads. See here and here for official info. Bind-mounted volumes on Mac tend to be the worst. I've seen similar performance hits when mounting a mid-sized Django + node project and trying to get the runserver command to be responsive (spoiler: it isn't very responsive in this case; there's too much fs overhead).
Something you can try: instead of mounting the whole app directory, mount as little as possible. It's hard to say how helpful that would be without knowing what the project looks like. You should also be able to increase performance by not using a bind mount at all; COPY your files in via the Dockerfile, then use a named volume to persist them. That puts a bit of a damper on your development workflow, but I think it would significantly speed up the sidekiq performance.
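As a rough sketch of that idea (not your actual project; the base image, paths and volume name are just placeholders), the Dockerfile would bake the code into the image:

FROM ruby:2.5
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
# bake the app code into the image instead of bind-mounting it at runtime
COPY . .

and the compose service would drop the '.:/app' bind mount, keeping only a named volume for data you actually need to persist, e.g. logs:

  sidekiq-1:
    build: .
    command: bundle exec sidekiq -c 4 -L log/sidekiq-1.log
    volumes:
      - sidekiq-logs:/app/log
volumes:
  sidekiq-logs:

If you'd rather keep the bind mount for live editing, Docker for Mac also accepts consistency flags on bind mounts (e.g. '.:/app:cached' or '.:/app:delegated'), which relax host/container sync guarantees and usually cut down the osxfs overhead noticeably.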
Related
Every action throws an error.
For example:
[greenjoy@greenjoyPC ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:50383->[::1]:53: read: connection refused.
Some information:
[greenjoy@greenjoyPC ~]$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
compose: Docker Compose (Docker Inc., v2.13.0)
dev: Docker Dev Environments (Docker Inc., v0.0.5)
extension: Manages Docker extensions (Docker Inc., v0.2.16)
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
scan: Docker Scan (Docker Inc., v0.22.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version:
init version: de40ad0
Security Options:
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.81-1-MANJARO
Operating System: Ubuntu Core 18
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 11.58GiB
Name: greenjoyPC
ID: 7JRW:4CYG:5CUT:PC2B:HOVA:7OPT:I6PR:3AD5:DYD7:2FOK:MVMU:ZCYH
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
And docker-desktop won't load no matter how long you wait; it just sits on "Docker Desktop starting...".
I've tried several approaches from different sources, but nothing works.
I am using Manjaro as well, and after starting the daemon and adding my user to the docker group, everything works as expected (including your particular docker run command).
However, from your error message, it looks more like a connectivity problem reaching the Docker registry servers. You may want to check your firewall to see whether connections are being blocked.
There is a similar case here, where setting a different DNS server seemed to work: why do i get this error when pulling an image in docker
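Since the error shows the daemon trying to resolve registry-1.docker.io through a local resolver on [::1]:53 that refuses connections, explicitly pointing the Docker daemon at a working DNS server is one thing to try. A minimal sketch, assuming the standard daemon.json location (the DNS addresses are just examples, and a snap-installed Docker may read its config from under /var/snap/docker/ instead of /etc/docker/):

# /etc/docker/daemon.json
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}

# restart the daemon so it picks up the new config
sudo systemctl restart docker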
I ran the cAdvisor container with this command:
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 --detach=true \
--name=cadvisor \
gcr.io/google-containers/cadvisor:v0.36.0
It runs without any errors:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0aa3cc8d5f2 gcr.io/google-containers/cadvisor:v0.36.0 "/usr/bin/cadvisor -…" 48 seconds ago Up 48 seconds (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp cadvisor
When I go to localhost:8080, it gives me this error message:
failed to get container "/" with error: unable to find data in memory cache
I tried with sudo docker run; still the same error.
How can I fix this? Is this Docker-related or cAdvisor-related?
Here is my system info:
OS : Ubuntu 22.04
Docker Info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
compose: Docker Compose (Docker Inc., v2.6.0)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 3
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: true
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version: v1.1.2-0-ga916309
init version: de40ad0
Security Options:
seccomp
Profile: default
rootless
cgroupns
Kernel Version: 5.15.0-40-generic
Operating System: Ubuntu 22.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 13.51GiB
Name: ubuntu-yan
ID: EHQF:VYEN:4YZV:GQ6C:THPI:5J3F:A5JS:OLR7:H4QN:Q5Q5:EATM:2RXR
Docker Root Dir: /home/yanpaing/.local/share/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
I had this issue because old cAdvisor versions don't support "recent" cgroup versions (your docker info shows Cgroup Version: 2).
Upgrading cAdvisor fixed the issue.
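For reference, a minimal sketch of re-running it with a newer build (newer releases are published under gcr.io/cadvisor/cadvisor rather than gcr.io/google-containers/cadvisor; the tag below is only an example, check the cAdvisor releases page for the current one):

docker rm -f cadvisor
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 --detach=true \
  --name=cadvisor \
  gcr.io/cadvisor/cadvisor:v0.47.2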
Troubleshooting tips
Use docker logs to retrieve cadvisor logs. Mine showed errors like:
W0107 09:20:00.064502 1 container.go:526] Failed to update stats for container "/system.slice/cups.service": \
failed to parse memory.usage_in_bytes - open /sys/fs/cgroup/system.slice/cups.service/memory.usage_in_bytes: \
no such file or directory, continuing to push stats
I've been battling trying to get Docker installed on RHEL7 and, now that I've been able to get it installed, I'm stuck just trying to do a simple docker pull.
I was able to finally get Docker installed using my proposed solution here Issues installing Docker on RHEL 7 Linux Server, but now during the extraction process, I get the following error:
latest: Pulling from [my-repo]
8657e219e309: Pull complete
a8db9e62fad8: Extracting [==================================================>] 3.507 GB/3.507 GB
failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr/bin/sbd: no such file or directory
Unable to find image '[my-docker-repo]:latest' locally
latest: Pulling from [my-repo]
8657e219e309: Pull complete
a8db9e62fad8: Extracting [==================================================>] 3.507 GB/3.507 GB
docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr/bin/sbd: no such file or directory.
I'm not sure if this is related to the way I installed docker or if it's actually something else. I only installed docker using the following two commands:
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.0.ce-1.el7.centos.noarch.rpm
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.0.ce-1.el7.centos.x86_64.rpm
I can run docker just fine and start the service, so I'm not sure the installation itself is the issue.
The only two issues I've found on the Internet that seem somewhat related to mine are these:
https://github.com/moby/moby/issues/41803
https://github.com/moby/moby/issues/41821
However, neither of these issues has a solution other than merged pull requests that apparently still don't fix things in my case.
I've also visited https://docs.docker.com/engine/security/rootless/#prerequisites and verified that the value 65,535 shows up in my /etc/subuid and /etc/subgid files.
Still no luck.
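For reference, checking those files looks roughly like this (the username and range start are placeholders, not my real values):

grep myuser /etc/subuid /etc/subgid
# /etc/subuid:myuser:100000:65535
# /etc/subgid:myuser:100000:65535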
Here's the output of my docker info command:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.5
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1160.21.1.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.9 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.349GiB
Name: d8de679d27f2453
ID: L43V:XEXI:6B6D:A3K4:KCI5:VQB7:MOG4:7TO5:QATR:5PM5:QT2Q:TTN5
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Here's my situation: I'm very inexperienced with any OS that isn't Windows. I'm working on a Raspberry Pi Zero W running Raspbian, with the ultimate goal of running zimdump so I can edit .zim files. A tutorial includes the use of Docker to mount the .zim file as a volume and work within a container. I seem to have installed Docker with the correct version and architecture, but docker run hello-world doesn't work as expected. Log from the first time I ran it:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
4ee5c797bcd7: Pull complete
Digest: sha256: [long sha256]
Status: Downloaded newer image for hello-world:latest
And nothing else. I ran it a second time, and nothing printed. The third time, I ran
sudo docker run hello-world -it
which printed more verbosely
docker: Error response from daemon: OCI runtime failed: container_linux.go:349: starting container process caused "exec: \"-it\": executable file not found in $PATH": unknown.
I tried an assortment of troubleshooting steps from users whose situations were only loosely related to mine, and I don't want to alter anything else behind the scenes that would make this harder for you and me.
Here’s my docker info:
Client:
Debug Mode: false
Server:
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 1
Server Version: 19.03.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.66+
Operating System: Raspbian GNU/Linux 10 (buster)
OSType: Linux
Architecture: armv6l
CPUs: 1
Total Memory: 424.8MiB
Name: box.lan
ID: DAJU:334L:G6WP:RARN:REWW:K2LE:CJUK:LCBJ:XDWH:ZX5D:4XRM:BCTM
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1s/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpuset support
I’ve spent 8 hours on this, and all I want to do is remove explicit wikipedia pages from the .zim so we can give this raspberry pi to kids as an offline internet.
You all are the best ☺️
The regular version of the hello-world Docker image won't work on a Raspberry Pi Zero, because the Pi Zero uses the ARMv6Z instruction set. Instead of docker run hello-world, you can run:
docker run --name someContainerName arm32v5/hello-world
Notice that this image was built for the ARM32v5 instruction set. In theory, any ARM32 image targeting v6 or below should work on a Pi Zero.
It took me a whole day to figure this out. I've written a blog post on how to get Docker working on a Raspberry Pi 1 and Zero if you want to learn the details.
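If you want to double-check the architecture match yourself, you can compare what the host reports with what a pulled image was built for; a quick sketch (on a Pi Zero I'd expect the first command to print armv6l):

# host CPU architecture
uname -m

# OS/architecture the locally pulled image was built for
docker image inspect hello-world --format '{{.Os}}/{{.Architecture}}'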
I'm new to Docker, and I tried to find a solution on Google before asking this question - no result.
I decided to learn Docker via a practical use case - creating a PostgreSQL container on my VM instance as a development environment.
I was on vacation and didn't check my server for several days. Later I tried to connect to my DB and couldn't - all of my active containers had exited with code 128.
I tried to start the DB container again - docker start django-postgres - and got this error message: Error response from daemon: OCI runtime create failed: container with id exists: 5c11e724bf52dd1cb6fd10ebda40710385e412981eb269c30071ecc8aac9e805: unknown
Error: failed to start containers: django-postgres
I suspect that somewhere in my system Docker keeps some metadata about my container that wasn't removed after the container went down with code 128, but my knowledge of Unix isn't enough to work out where that might be. Also, I'm afraid of losing the DB data associated with the container.
Some technical info:
docker version:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
docker info
Containers: 9
Running: 2
Paused: 0
Stopped: 7
Images: 5
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-116-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 488.3MiB
ID: NDUH:OH24:4M4L:TR5O:TOIH:ARV4:LNRP:6QNE:WEYW:TMXR:7KNK:ZPDD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Can anyone help me understand this issue and how to fix it without losing data?
N.B. The second container that exited with code 128 was OpenVPN. I can't restart it either, but the error is different - cgroups: cannot found cgroup mount destination: unknown
I found a solution here (GitHub):
The temp fix is:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
This fix didn't help with the Postgres container, though.
It is possible to list all running and stopped containers using docker ps -a (-a or --all shows all containers; the default shows just running ones).
You can find the volumes attached to your old postgres container using docker inspect <container-id> (maybe pipe it to less and search for volumes).
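For example, something like this should print just the mounts section for the container from the question (the Go template filters the inspect output):

docker inspect django-postgres --format '{{json .Mounts}}'

Each entry shows the volume name and where it was mounted inside the container; that name is what goes into source= in the command below.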
If you want to recover your data, you can attach that volume to a new postgres container and recover it (if it is a root volume, change target to /):
docker run --name new-postgres \
--mount source=myoldvol,target=/var/lib/postgresql/data -d postgres
And then you can remove the old one by using docker rm <container-id>.
For more information, please see:
docker ps,
docker volumes,
docker rm