jackd cannot use real-time scheduling within a Docker container

I cannot for the life of me figure out how to get jackd to run with realtime priority within a docker container. My goal is to get SuperCollider audio streaming over the internet using webrtc with help from jackd/gstreamer, and I believe I need realtime scheduling enabled for jackd in order for SuperCollider to use it (this may be a red herring, and maybe SuperCollider doesn't need realtime jackd, but I still want to try this path).
I've had no success on my host machine, a 2014 MacBook Air running macOS 10.13.6, BUT I've also attempted building the image and running the container on an AWS Docker beanstalk application running on 64bit Amazon Linux/2.12.16.
Here's a link to the repo with a dockerfile and some README notes with more details/error logs: https://github.com/osetinsky/pulse-streamer/tree/supercollider-example/examples/supercollider
I've spent hours reading similar issues people have had, attempting the following:
setting rtprio and memlock within various limits.conf files set in various directories with the appropriate user groups: http://jackaudio.org/faq/linux_rt_config.html
updating cgconfig and cgrules for all three of the approaches suggested here: https://github.com/jackaudio/jackaudio.github.com/wiki/Cgroups
running docker with realtime scheduler options: https://docs.docker.com/config/containers/resource_constraints/#configure-the-realtime-scheduler (running with --cap-add=sys_nice --cpu-rt-runtime=950000 --ulimit rtprio=99)
The last approach seemed promising, but fails when I run:
docker run --privileged --ulimit rtprio=99 --ulimit memlock=800000000 --cpu-rt-runtime=950000 --volume=/dev/shm:/dev/shm:rw --user=1000 -it jack-sc bash
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:345: starting container process caused
"process_linux.go:281: applying cgroup configuration for process caused
\"failed to write 950000 to cpu.rt_runtime_us: write /sys/fs/cgroup/cpu/docker/1bc42e297356c9ad6f734726590c7f42450fcae44e002095068b8f10bc93d4f3/cpu.rt_runtime_us: invalid argument\"": unknown.
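For anyone hitting the same "invalid argument": per the Docker realtime-scheduler docs linked above, the parent docker cgroup needs its own realtime budget before --cpu-rt-runtime can carve a slice out of it. A host-side sketch (cgroup v1 paths assumed; untested against the OP's exact setup):

```shell
# Give the "docker" cgroup a realtime CPU budget so that
# `docker run --cpu-rt-runtime=950000 ...` has something to allocate from.
# Paths assume cgroup v1; the value must not exceed the parent's own budget.
RT_US=950000
CG=/sys/fs/cgroup/cpu/docker
if [ -w "$CG/cpu.rt_runtime_us" ]; then
  echo "$RT_US" > "$CG/cpu.rt_runtime_us"
  echo "realtime budget set to $RT_US us"
else
  echo "cannot write $CG/cpu.rt_runtime_us (needs root on a cgroup-v1 host)"
fi
```

Alternatively, the same docs describe starting the daemon itself with dockerd --cpu-rt-runtime=950000, which sets that parent budget for all containers.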
For the record, I am able to stream a test tone in realtime with PulseAudio and GStreamer using Pion (a Go implementation of WebRTC): https://github.com/osetinsky/pulse-streamer
I just want that test tone to be replaced with realtime audio from SuperCollider :)
Similar issues mentioned here:
https://github.com/moby/moby/issues/31411
https://bugzilla.redhat.com/show_bug.cgi?id=1178246
(and many more...)
Thanks!

Related

linux/amazon: not running docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that causes problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
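As a concrete sketch of that sibling pattern (the image and container names below are placeholders, not anything from the post):

```shell
# Inside a CI container started with:
#   docker run -v /var/run/docker.sock:/var/run/docker.sock ...
# the docker CLI talks to the HOST daemon, so this starts a *sibling*:
DB_NAME=it-postgres      # placeholder name for a test database
DB_IMAGE=postgres:13     # placeholder image
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run --rm -d --name "$DB_NAME" "$DB_IMAGE" || echo "failed to start $DB_IMAGE"
  # ... run the integration tests against it here ...
  docker stop "$DB_NAME" >/dev/null 2>&1 || true
else
  echo "docker daemon not reachable; would run: docker run --rm -d --name $DB_NAME $DB_IMAGE"
fi
```

Remember that the sibling runs at the host level: published ports and volume paths resolve against the host, not the CI container, and anything the job does not stop will outlive it.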
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is to run the outer container with extra privileges (starting with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in that post.
So, running Docker inside Docker was long considered by many a good solution for this type of problem; now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from within a context that is different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues arise on the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock.
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
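One caveat on that chmod 757: it makes the socket writable by every account on the host, which is effectively root access. A narrower alternative on standard installs (which create a docker group) is group membership; the user name below is a placeholder:

```shell
# Grant socket access via the docker group instead of loosening the socket mode.
TARGET_USER=jenkins   # placeholder user
GROUP_CMD="usermod -aG docker $TARGET_USER"
echo "as root, run: $GROUP_CMD  (then log out and back in for it to apply)"
```

Members of the docker group still have root-equivalent power over the daemon, but at least access is limited to users you explicitly add.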
For the past few days I was trying my best, just like you, to run containers within containers, and wasted many hours. Most people advised me either to use Docker's DinD image, which is not applicable in my case because I need the main container to be an Ubuntu OS, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox's Sysbox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application, and it has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022; I highly recommend trying it before bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally ssh into your Ubuntu main container without being able to access anything on the host machine, and from your main container you can create all kinds of containers just like a normal local system does. That built-in systemd is very important for setting up Docker conveniently inside the container.
One simple command to run a container under the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
A quick link to instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md

Error response from daemon: join session keyring: create session key: disk quota exceeded

I tried installing docker on a server of mine using this tutorial.
I want to run docker images remotely and use the portainer web-interface to administrate everything.
However, when I get to the point where I need to test my installation and I enter the command $ sudo docker run hello-world, I only get the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"join session keyring: create session key: disk quota exceeded\"": unknown. ERRO[0000] error waiting for container: context canceled
I tried the following methods:
"Install Docker CE / Install using the convenience script"
"Install Docker CE / Install using the repository"
This also happens when I try to run other images (e.g. Portainer).
I hope this is enough information.
I am new to docker, so I don't know how I should debug it efficiently.
Try increasing the kernel.keys.maxkeys kernel parameter:
echo 50000 > /proc/sys/kernel/keys/maxkeys
see: https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/2
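A hedged follow-up: the echo above only lasts until reboot. To persist it, the usual route is a sysctl.d drop-in (the file name and the maxbytes value here are illustrative, not from the linked thread):

```shell
# Persistent variant of the maxkeys bump; kernel.keys.maxbytes often needs
# raising alongside maxkeys when many containers share one user's keyring.
SYSCTL_CONF='kernel.keys.maxkeys = 50000
kernel.keys.maxbytes = 25000000'
printf '%s\n' "$SYSCTL_CONF"
# as root:
#   printf '%s\n' "$SYSCTL_CONF" > /etc/sysctl.d/99-docker-keys.conf
#   sysctl --system
```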
So, as it turns out, I connected to the wrong vServer.
The one I was connected to is using LXD (as you might have seen in my previous comment), which doesn't support Docker (at least not the way this guide advises).
When I ran the same setup on a vServer using a bare-metal (type 1) hypervisor, it worked without a problem.
I think this has to do with automatic storage allocation under LXD, but this is just a guess.
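For anyone who does want Docker inside an LXD container rather than switching servers, LXD has a nesting switch that is the commonly cited fix (the container name below is a placeholder; this is not something the answerer tested):

```shell
# Allow a nested container runtime inside an LXD container.
LXD_CONTAINER=myvserver   # placeholder LXD container name
if command -v lxc >/dev/null 2>&1; then
  lxc config set "$LXD_CONTAINER" security.nesting true || echo "could not configure $LXD_CONTAINER"
  lxc restart "$LXD_CONTAINER" || true
else
  echo "lxc CLI not found; would run: lxc config set $LXD_CONTAINER security.nesting true"
fi
```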

systemctl command getting this error: Failed to get D-Bus: Unknown error -1 | Docker openSUSE

In an openSUSE Docker container, my cron job is not working. When I try a systemctl command I get this error: Failed to get D-Bus: Unknown error -1. I have read many blogs and Stack Overflow questions; everywhere it was advised that the basic architecture of the Docker image should be redesigned.
What exactly needs to be done is not mentioned. Kindly help; I am stuck on this issue.
To a first approximation, commands like systemctl, initctl, service, or start just don't work in Docker and you should find a different way to do what you're attempting.
Stylewise, the standard way to use a Docker container is to launch some sort of service in the foreground. As one specific example, the standard Redis image doesn't go through any sort of init script; it just runs
CMD ["redis-server"]
In most Docker images it's unusual to even so much as launch a background process (with the shell & operator). It's not usually necessary and in Dockerfiles the interaction with the RUN directive has confused some people.
In the specific case of systemctl, it requires an extremely heavyweight init system: systemd is not just a process manager but also wants to monitor and manage kernel-level parameters, includes a logging system, runs an inter-process message bus, and provides other functionality. You can't run systemd under Docker without making the container --privileged, which gives the container the ability to "escape" onto the host system in some unfortunate ways.
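Applied to the original cron question, the usual fix is to run the cron daemon itself as the container's foreground process instead of going through systemctl. A sketch (the package name and flag are assumptions for a cronie-based openSUSE image; adjust to your base image):

```dockerfile
FROM opensuse/leap
# cronie provides the cron daemon; the package name may differ by release
RUN zypper --non-interactive install cronie
COPY myjob /etc/cron.d/myjob
# run the daemon in the foreground as PID 1 (-n keeps cronie from daemonizing)
CMD ["crond", "-n"]
```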

Docker build slow on EC2 (Amazon Linux)

I provisioned an instance from Amazon Machine Image based on Amazon Linux (amzn-ami-2016.03.c-amazon-ecs-optimized). While attempting to do a docker build for my project, I find the process extremely slow, even for simple tasks like setting environment variables ENV TEST_PORT=3000 etc. A build that takes less than 5 minutes on my local machine has been running for at least an hour.
Running docker info returns Storage as devicemapper, and this article suggests switching to aufs, but it is for Ubuntu. I also have an EBS volume attached to my instance; how do I switch Docker to use that instead? Will that fix this problem?
I experienced the same problem: each simple step of the Dockerfile (like ENV or ARG) was taking one or two seconds on my Amazon Linux EC2 box.
To solve this, I had to:
upgrade Docker to version 17.03.2-ce
switch Docker to the overlay2 storage driver, as suggested by https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/ (there is a dedicated section for CentOS)
For the second step, I created /etc/docker/daemon.json with the following content:
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
stop and start the docker daemon.
Now each simple step is very fast.
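To confirm the switch took effect after the restart (the expected value comes from the daemon.json above):

```shell
# Check which storage driver the running daemon is using.
EXPECTED_DRIVER=overlay2
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker info --format '{{.Driver}}'   # should match $EXPECTED_DRIVER
else
  echo "docker daemon not reachable; run: docker info --format '{{.Driver}}'"
fi
```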

Docker client freeze

The architecture is simple: I have a small bash script that orchestrates two types of containers. A member of type "1" talks to a member of type "2", with about 15 containers of each type. When I detect that a member of type "2" (or "1") has died, I kill its counterpart and raise both again.
The environment is Amazon EC2, 8 cores, Ubuntu 14.04.
After some period of time, 'docker ps' gets no response. I think the problem is the number of times I issue the 'docker stop' command; when I used 'docker kill' instead, the problem occurred faster. Reading the documentation, the behavior of 'docker stop' matches what I see in docker.log: a lot of docker kill entries, which is what happens when a container does not respond to a 'docker stop' command. Is there a problem with docker stop/kill and the Docker daemon's memory management?
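(For reference: docker stop sends SIGTERM, waits a grace period of 10 seconds by default, and only then SIGKILLs, which is why the log fills with kills when containers ignore SIGTERM. Lengthening the grace period, sketched below with a placeholder container name, may reduce that churn; whether it avoids the hang is untested.)

```shell
# Give slow-to-exit members more time before the daemon escalates to SIGKILL.
STOP_TIMEOUT=30          # seconds; the default is 10
MEMBER=member-type-2-0   # placeholder container name
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker stop --time="$STOP_TIMEOUT" "$MEMBER" || echo "no such container: $MEMBER"
else
  echo "docker daemon not reachable; would run: docker stop --time=$STOP_TIMEOUT $MEMBER"
fi
```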
That could be linked to issue 15101, where daemon containers have trouble being killed.
I still have this issue on docker 1.8.1.
Removing /etc/apparmor.d/docker and running sudo service apparmor reload appears to fix it
This looks like a problem in the Docker infrastructure. There is a pretty big thread on the subject that matches what I asked here; I will post an answer when the issue is resolved there:
Docker Daemon Hangs under load
