Docker build slow on EC2 (Amazon Linux) - linux

I provisioned an instance from an Amazon Machine Image based on Amazon Linux (amzn-ami-2016.03.c-amazon-ecs-optimized). While attempting to do a docker build for my project, I find the process extremely slow, even for simple steps like setting environment variables (ENV TEST_PORT=3000, etc.). A build that takes less than 5 minutes on my local machine has been running for at least an hour.
Running docker info shows the storage driver as devicemapper, and this article suggests switching to aufs, but it is for Ubuntu. I also have an EBS volume attached to my instance; how do I switch Docker to use that instead? Will that fix this problem?

I experienced the same problem: each simple step of the Dockerfile (like ENV or ARG) was taking one or two seconds on my Amazon Linux EC2 box.
To solve this, I had to:
upgrade Docker to version 17.03.2-ce
switch Docker's storage driver to overlay2, as suggested by https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/ (there is a dedicated section for CentOS)
I created /etc/docker/daemon.json with the following content:
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
stop and start the Docker daemon.
Now each simple step is very fast.
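For reference, the restart and the follow-up check looked roughly like this (a sketch; Amazon Linux 1 uses the service command, systemd-based distributions use systemctl instead):
sudo service docker restart          # or: sudo systemctl restart docker
docker info | grep "Storage Driver"  # should now report overlay2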

Related

Docker images and containers change when docker desktop is running on linux

When docker desktop is running on linux, I see a different set of containers and images compared to when it is not running. That is, when I run docker images in the terminal, the output depends on whether docker desktop is running or not. After I 'quit docker desktop', the original behavior is restored.
I note the following changes:
docker desktop is off      | docker desktop is running
images 'a, b, c'           | shows images 'd, e, f'
containers 'aa, bb, cc'    | containers 'dd, ee, ff'
non colored cli output     | pretty colored cli output
My suspicion is that docker desktop kills a running docker service and starts a fresh one whose images and containers are located elsewhere on my filesystem. Then after quitting, the original service is restored. I'd like this behavior to change, such that the images and containers I'm working on are always the same, regardless of whether docker desktop is running or not.
I'm looking for some feedback on how to start debugging this.
Docker only runs natively on Linux. Docker Desktop is the "hack" that allows running Docker on other platforms (macOS, Windows, etc.). Docker Desktop actually starts a Linux VM and runs Docker inside that VM. It then takes care of mapping ports and volumes so that it appears to the end user that Docker is "running directly on the host".
The beauty of running Docker on Linux is that it runs natively and you don't need extra hacks and tricks. So why you would use Docker Desktop on Ubuntu... beats me :) However, the explanation of why you see different results is that you are seeing two different Docker daemons running on two different machines: one on the host and one in a VM.
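If you want to see which daemon your CLI is currently talking to, Docker contexts make the switch visible (a sketch; Docker Desktop for Linux normally registers a context named desktop-linux, but the name may differ on your install):
docker context ls                  # lists known daemons/endpoints; the active one is marked with *
docker context use default         # talk to the host's own dockerd
docker context use desktop-linux   # talk to the daemon inside the Docker Desktop VM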

linuxamazon not running docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative,
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
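In the Jenkins case that boils down to something like the following (a sketch, assuming the official jenkins/jenkins:lts image plus a Docker CLI installed inside it, e.g. via a small derived image):
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
Containers started by Jenkins jobs then show up in docker ps on the host, right next to the Jenkins container itself.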
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is to run the outer container with extra privileges (starting it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered by many a good solution for this type of problem. Now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
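For completeness, the official image can be tried out roughly like this (a sketch; the inner daemon needs a few seconds to start before the exec calls succeed):
docker run --privileged -d --name dind docker:dind
docker exec dind docker ps                                 # talks to the inner daemon via its local socket
docker exec dind docker run --rm alpine echo "hello from inside DinD"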
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but it has a few drawbacks that stem from the fact that you are launching the container from within a context that is different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we'll need to mount the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume into the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise with the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock.
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I had been trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far most people advised me either to use Docker's DinD image, which is not applicable in my case since I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it before bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers just like a normal local system does. That built-in systemd is very important for setting up Docker conveniently inside the container.
A simple command to run a container with the sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
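As a concrete sketch (assuming sysbox is already installed and the nestybox/ubuntu-focal-systemd-docker image they publish is still available):
docker run --runtime=sysbox-runc -d --name inner-host nestybox/ubuntu-focal-systemd-docker
docker exec -it inner-host docker run --rm hello-world   # give the inner systemd a moment to start dockerd first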

jackd cannot use real-time scheduling within docker container

I cannot for the life of me figure out how to get jackd to run with realtime priority within a docker container. My goal is to get SuperCollider audio streaming over the internet using webrtc with help from jackd/gstreamer, and I believe I need realtime scheduling enabled for jackd in order for SuperCollider to use it (this may be a red herring, and maybe SuperCollider doesn't need realtime jackd, but I still want to try this path).
I've had no success on my host machine, a 2014 MacBook Air running macOS 10.13.6, BUT I've also attempted building the image and running the container on an AWS Docker beanstalk application running on 64bit Amazon Linux/2.12.16.
Here's a link to the repo with a dockerfile and some README notes with more details/error logs: https://github.com/osetinsky/pulse-streamer/tree/supercollider-example/examples/supercollider
I've spent hours reading similar issues people have had, attempting the following:
setting rtprio and memlock within various limits.conf files in various directories with the appropriate user groups (see the example snippet after this list): http://jackaudio.org/faq/linux_rt_config.html
updating cgconfig and cgrules for all three of the approaches suggested here: https://github.com/jackaudio/jackaudio.github.com/wiki/Cgroups
running docker with the realtime scheduler options: https://docs.docker.com/config/containers/resource_constraints/#configure-the-realtime-scheduler (running with --cap-add=sys_nice --cpu-rt-runtime=950000 --ulimit rtprio=99)
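For reference, the limits.conf entries from that first approach look roughly like this (a sketch assuming the user running jackd belongs to the audio group; the exact file name under /etc/security/limits.d/ is arbitrary):
# /etc/security/limits.d/audio.conf
@audio   -  rtprio     95
@audio   -  memlock    unlimited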
The last approach seemed promising, but fails when I run:
docker run --privileged --ulimit rtprio=99 --ulimit memlock=800000000 --cpu-rt-runtime=950000 --volume=/dev/shm:/dev/shm:rw --user=1000 -it jack-sc bash
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:345: starting container process caused
"process_linux.go:281: applying cgroup configuration for process caused
\"failed to write 950000 to cpu.rt_runtime_us: write /sys/fs/cgroup/cpu/docker/1bc42e297356c9ad6f734726590c7f42450fcae44e0020950
68b8f10bc93d4f3/cpu.rt_runtime_us: invalid argument\"": unknown.
For the record, I am able to stream a test tone in realtime with pulseaudio and gstreamer using Pion (a go implementation of webrtc): https://github.com/osetinsky/pulse-streamer
I just want that test tone to be replaced with realtime audio from SuperCollider :)
Similar issues mentioned here:
https://github.com/moby/moby/issues/31411
https://bugzilla.redhat.com/show_bug.cgi?id=1178246
(and many more...)
Thanks!

Export/Save? Docker container on Linux to hard drive in order to Import/Load it in Windows 10 without Hyper-V?

I have a large container that I cannot get via network (due to shitty internet connection), so I need a way to export that container to hard drive in order to use it on my Windows machine. So basically:
Docker container running on Linux ->
Export/Save on hard drive ->
Import/Load on Windows ->
Run on Windows 10 with/without Hyper-V?
How can I achieve this? I'm confused about Export/Import versus Load/Save. Can you give the full command lines?
Let's assume this is my container:
Container ID: 638aac32ff06
Image: registry.mycompany.com/db:latest
Ports: 0.0.0.0:5432->5432/tcp
Name: db
You can't. Containers created under Linux won't work under plain Windows. I hope it will become possible in the future, when MS releases a complete Ubuntu subsystem under Windows, but not now.
Yes, it seems possible now! (which is quite amazing!)
On Linux (source machine) run (note that docker save operates on images, not containers, so pass the image name):
docker save {image_name} -o {path_to_save}.tar
Then, on Windows (target machine) run:
docker load -i {path_to_save}.tar
That's all! (Be sure Docker Desktop is set to use Linux containers from the tray icon menu.)
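Since the question also asks about Export/Import versus Save/Load, here is a sketch using the container details above (the imported tag name is just an example):
# save/load works on the image: keeps layers, tags and metadata
docker save registry.mycompany.com/db:latest -o db-image.tar
docker load -i db-image.tar
# export/import works on a container's filesystem: flattens it to one layer and drops metadata
docker export 638aac32ff06 -o db-container.tar
docker import db-container.tar mycompany/db:imported   # example tag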

Docker containers as Linux services?

I just created a secure Docker Registry and ran it on a remote VM (using docker run ...). I then ran docker ps and saw that it is in fact running. I exited the machine and then SSHed back in. Again, I ran docker ps and verified it "survived" me exiting the SSH session.
This has me wondering: do Docker containers actually run as Linux services? If not, is there any way of getting them to run as traditional (upstart- or systemd-based) services? Is there even any reason/merit to do so?
The docker engine runs as a daemon.
That is mentioned in "Host integration":
As of Docker 1.2, restart policies are the built-in Docker mechanism for restarting containers when they exit. If set, restart policies will be used when the Docker daemon starts up, as typically happens after a system boot. Restart policies will ensure that linked containers are started in the correct order.
If restart policies don’t suit your needs (i.e., you have non-Docker processes that depend on Docker containers), you can use a process manager like upstart, systemd or supervisor instead.
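For example, a minimal systemd unit for the registry container from the question could look like this (a sketch, assuming the container already exists and was created with --name registry; the container, file, and unit names here are placeholders):
# /etc/systemd/system/docker-registry.service
[Unit]
Description=Docker registry container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a registry
ExecStop=/usr/bin/docker stop -t 10 registry

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now docker-registry so the container starts again after a reboot.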
Using such host integration involves (when a container runs with certain options) some security concerns, by the way: see issue 14767 and issue 6401:
The container (with --net host option) is the host when it comes to the network stack so any services running on the host are accessible to the container. It just so happens that you communicate to upstart (and others) this way.
This feature is a runtime only option, just like the --privileged flag, therefore an image cannot request this, it must be explicitly set at runtime.
