OCI runtime error for Docker on Debian Bullseye

I am superficially familiar with Docker and know a bit about Linux, but my current situation has me out of my depth.
I am repurposing an older laptop (ThinkPad T540p) to host a few network services via Docker. I was able to install and run Docker on it using the previous OS (Ubuntu 18 or 20 LTS), tested using docker run hello-world.
After that I reinstalled the laptop, now using Debian Bullseye. I ran apt update && apt upgrade after installing to ensure an up-to-date system, and installed Docker. When I ran docker run hello-world, however, an error occurred that I have been unable to debug.
Some info:
root@machine:$ docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: process_linux.go:458: setting cgroup config for procHooks process caused: can't load program: operation not permitted: unknown.
ERRO[0002] error waiting for container: context canceled
root@machine:$ docker --version
Docker version 20.10.6, build 370c289
root@machine:$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
root@machine:$ uname -a
Linux machine 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
I have been looking around for issues containing a similar error to mine and found
https://github.com/opencontainers/runc/issues/2167 (CloudLinux with cgroups/procHooks error)
https://github.com/docker/for-linux/issues/1183 (slightly different but system capability mismatch or something like that)
docker: Error response from daemon: OCI runtime create failed
All of these seem to point towards some kind of Seccomp/AppArmor setting that is blocking Docker from starting, but I have no clue what to change in order to get it working. The terms AppArmor and Seccomp were random jargon to me two days ago, so I would rather not just go edit some system config file blindly.
Clues on what is going wrong or what to change are very much appreciated.
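A few standard checks that may help narrow down whether cgroups, seccomp, or AppArmor is involved (a sketch; these commands and their output are not part of the original post):
docker info --format 'cgroup driver: {{.CgroupDriver}}, cgroup version: {{.CgroupVersion}}'
docker info --format 'security options: {{.SecurityOptions}}'
uname -r   # the kernel actually booted (shown above as 4.19.0-16-amd64)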

Related

plesk install on docker

When I want to download and run Plesk in my Docker container I get this error message:
Warning: Cannot exists mariadb on this system: no executable /bin/systemctl
Warning: restart service mariadb failed
Warning: Cannot exists mariadb on this system: no executable /bin/systemctl
ERROR while trying to stop MySQL server
STOP Bootstrapper 18.0.40 prep-install for BASE AT Sun Jan 9 16:08:57 UTC 2022
and then my download will not finish.
How can I fix this?
I have created the docker-systemctl-replacement script to help package software that is not ready to be containerized.
Specifically, I can run a full LAMP stack in a single container, which may cover your use case altogether. Have a look at the examples in docker-systemctl-images.
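A minimal sketch of how the replacement script is usually wired into an image (names and paths here are assumptions; the repository's examples show the real, tested variants):
FROM centos:7
RUN yum install -y mariadb-server
# Assumption: systemctl.py from the docker-systemctl-replacement repo is in the build
# context (CentOS 7 ships Python 2, so the Python-2 variant is used here).
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
EXPOSE 3306
# Run the replacement as PID 1 so enabled services are started, systemd-style.
CMD ["/usr/bin/systemctl"]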

Building docker image FROM scratch using a buildroot linux

Okay, so this is a complicated topic, so thanks to anyone who actually takes the time to read this. This all started by trying to create an executable from a Python script to run on the target arch.
The target arch is arm64. I am doing all of this on a Mac. The major gotcha is that the target device uses uClibc. If it used glibc or musl I would be able to cross compile using the Ubuntu container described below or an Alpine container with Python (using pyinstaller to create the executable).
I created a buildx container and ran an Ubuntu container on the arm64 architecture (confirmed). From there I am using a tool called Buildroot from within the Ubuntu container to create a custom Linux filesystem, which after much waiting creates "rootfs.tar".
Okay, now with all that non-Docker stuff out of the way: I copy this rootfs.tar file to my host and try to build an image to run my custom Linux.
Dockerfile
FROM scratch
MAINTAINER peskyadmin
ADD rootfs.tar /
build command
docker buildx build -t "${DOCKER_USER}/testrtfs:latest" --platform linux/arm64 --push .
run command
docker run --privileged --entrypoint "/bin/sh" -it "$DOCKER_USER/testrtfs:latest" --platform linux/arm64
run output
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
Using the latest version of Docker Desktop. I don't think that the warning is an issue, because when I run the Ubuntu container created with buildx it shows the same message but is running on the target arch.
My question is: what am I doing wrong? I do not understand this error. My gut is telling me the issue has to do with the Dockerfile, but I am not sure, as it could be an issue with using Buildroot to create the rootfs.tar.
The target CPU is a Cortex-A53, which is the same as in the Raspberry Pi 3. I suppose that I could try to install the image directly onto a bare-metal Pi and then cross compile on there, but I really would like to keep everything virtualized on my Mac.
There is no need for any containers. Buildroot (and other build systems) do cross compiling, which means you can build for a different target than the machine you build on.
In other words, you simply select arm64 as the target architecture, make the build, then install the resulting rootfs on your target.
However, this rootfs completely replaces the target rootfs, so it's not relevant that the target is uclibc. So my guess is that you want to install just a single executable. Doing that is made more difficult with shared libraries, because you need to copy not just the executable, but also any libraries it links with. So it may help to configure Buildroot to link statically (BR2_STATIC_LIBS).
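A sketch of the relevant Buildroot configuration symbols (set via make menuconfig or a defconfig; exact menu wording varies between Buildroot releases):
BR2_aarch64=y        # Target options -> Target Architecture: AArch64
BR2_STATIC_LIBS=y    # Build options -> libraries -> static only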
-EDIT-
If you want to run an environment similar to the target, it's not possible to run this in docker unless your build machine is also an arm64. That's what the warning "requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64)" is saying. Instead of docker, you need to use virtualisation, e.g. qemu.
You can bring up a qemu environment for arm64 with make qemu_aarch64_virt_defconfig. Check out board/qemu/aarch64-virt/readme.txt for how to start qemu.
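A rough sequence, assuming a Buildroot checkout and qemu-system-aarch64 installed on the build machine (the exact qemu command line is in that readme; this is only its typical shape):
make qemu_aarch64_virt_defconfig
make
qemu-system-aarch64 -M virt -cpu cortex-a53 -nographic -smp 1 -m 512 \
  -kernel output/images/Image \
  -append "rootwait root=/dev/vda console=ttyAMA0" \
  -drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0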

Docker image with imported volume runs on Docker for Mac, fails with "exec format error" on Docker CE on Linux

We're building an OS image using Yocto on Debian, which outputs a bzipped volume that we can use as a base image in Docker using docker import, and we push this image to our registry to use as a base image:
cp build/tmp/deploy/images/raspberrypi4/device.tar.bz2 .
docker import device.tar.bz2 registry/base_image
docker push registry/base_image
We include the base image as part of another docker image:
FROM registry/base_image
ADD target/app.jar app.jar
ADD docker-run.sh run.sh
ENTRYPOINT "./run.sh"
This image is then successfully built by our CI on a linux (Amazon Linux 2) agent, and pushed to the registry. I'm able to pull the image and run it on a Mac with the current version of Docker for Mac.
However, trying to run the same docker image on a linux machine (even on the same linux build agent) results in the following exec format error:
standard_init_linux.go:228: exec user process caused: exec format error
Using an alternative docker image as the base allows the entrypoint to execute, so I'm pretty sure the issue is related to our custom base image.
As Docker is largely cross-platform, I'm surprised it works on macOS (Intel and M1) but not Linux (tested on Ubuntu and Amazon Linux). I've tried both the Ubuntu and Docker hosted apt repositories for the Docker install.
How can I further debug?
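One quick check for this kind of mismatch (not from the original post; docker image inspect exposes the architecture an image was built for):
docker image inspect --format '{{.Os}}/{{.Architecture}}' registry/base_image
# linux/amd64 is expected on an x86 host; anything arm-flavoured explains the exec format error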
The issue here was that the base image was arm based, and that Docker for Mac can run arm images out of the box, even on intel machines.
https://docs.docker.com/desktop/multi-arch/
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
There's a good write up here for running arm docker images on linux x86 hosts
https://matchboxdorry.gitbooks.io/matchboxblog/content/blogs/build_and_run_arm_images.html
After installing QEMU on your host OS, you need to mount the QEMU binary:
docker run -it --name your-container-name -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static your-arm-image
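On a plain Linux host the qemu interpreters usually also have to be registered with binfmt_misc once; a common way to do that (an assumption, not taken from the linked write-up) is the multiarch helper image:
# Registers qemu-* interpreters with the kernel's binfmt_misc (needs --privileged).
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes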

DPDK Compilation fails inside WSL2 Docker Desktop's containers

I can't compile DPDK inside a Docker container running under WSL2 as a VM (with Windows 10 as the host machine).
Background
Trying to compile locally, inside a WSL container, some DPDK lib that used to be built on remote native Linux machines.
The Dockerfile running the compilation has kernel headers, the GNU toolchain and various other dependencies installed. The distribution is CentOS 7.
The containers are managed by Docker Desktop.
Exact versions are not relevant here.
The Problem
Similar problems occur across DPDK versions.
In DPDK 20.11, using the meson build system, the file kernel/linux/meson.build reports:
../kernel/linux/meson.build:23:1: ERROR: Problem encountered: Cannot compile kernel modules as requested - are kernel headers installed?
If I compile different versions of DPDK or build using other build systems (makefiles), I get variants of the same error.
Inside /lib/modules there is no entry matching WSL2's "uname -r" output.
Although WSL2 has /lib/modules/5.4.72-microsoft-standard-WSL2 (as a softlink), this softlink does not appear in the container.
The solution is adding this line to the Dockerfile*:
RUN ln -s /lib/modules/$(ls /lib/modules/) /lib/modules/$(uname -r)
*(This assumes only one entry is found in /lib/modules and that /usr/src/kernels exists for that entry.)
Another solution (which I didn't test) is to run the container with:
docker run --name test -v /usr/src/kernels:/usr/src/kernels -v /lib/modules:/lib/modules -dt image-name
This assumes your host has kernel headers installed and that they can be found (i.e., /usr/src/kernels/XXX/ exists and the /lib/modules/XXX/build softlink is not broken).
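A quick way to confirm either approach worked before re-running the DPDK build (a sketch; image-name is the image built from the Dockerfile above):
# The container shares the WSL2 kernel, so uname -r inside it prints the WSL2 release;
# after the workaround an entry with that name should exist under /lib/modules.
docker run --rm image-name sh -c 'uname -r; ls /lib/modules/'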

How to remove/install a docker image on an unconfigured Docker for centos 7

I'm using CentOS 6.6 and 7 and decided to move to CentOS 7, as there are some issues using Docker with CentOS 6.6 (reboot issues for me). I'm trying to pull the current centos image from Docker (should just be docker pull centos).
However, because I already had a centos Docker image installed on the 6.6 virtual machine, I thought it conflicted with the one I'm trying to pull on CentOS 7. It states that the image (f1b something) is already being used on the system, which causes the download to not go through. Simply going over to the CentOS 6.6 machine and trying to remove the images (which are labeled as none, by the way, so you have to do docker images -a), even with force, does nothing. The only solution so far is a full removal of Docker and its dependencies and a fresh reinstall, which should come image-free.
Of course this is not the solution I want. One of two things can happen: either a way to make the two of them coexist, or a way to remove the current one without removing any other images. Or, if I am not getting this right, an entirely different approach.
EDIT +1: OK, here's the actual error I'm receiving when doing the docker pull...
f1b...: download complete
f1b...:error downloading dependant layers
c85...:Downloading [>
7322...: Error pulling image (latest) from docker.io/centos, endpoint :https://registry-1.docker.io/v1,Dr
7322...:Error pulling image (latest) from docker.io/centos, Driver devicemapper failed to create image rootfs
FATA[0012] Error pulling image (latest) from docker.io/centos, Driver device mapper failed to create image rootfs f1b...:error running DeviceCreate (createSnapDevice) dm_task_run failed
And looking over the problem more, I'm not so sure it's because of the CentOS 6.6 machine like I had initially thought, despite the images sharing the same IDs.
EDIT +2: Stranger still, the fatal error codes keep changing (I'm assuming those are the FATA[0012] values?).
http://docker-sean.readthedocs.org/en/latest/chapter1.html
There's a config file that needs to be changed for CentOS 7 Docker users, which comes down to applying the following change
OPTIONS='-g /docker/data -p /var/run/docker.pid'
in /etc/sysconfig/docker (edit it with vim/vi).
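For context, a sketch of what that file ends up looking like (only the OPTIONS line changes; -g and -p are legacy daemon flags for the data root and pid file):
# /etc/sysconfig/docker (sketch; other lines left as installed)
# -g: put Docker's data (images, containers) under /docker/data
# -p: pid file used by the init script
OPTIONS='-g /docker/data -p /var/run/docker.pid'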
I swear docker is going to be the death of me...
EDIT +1: OK, let's remap the solution to the following, starting from a new CentOS 7 machine...
yum install docker
service docker start
docker pull centos
ERROR
systemctl enable docker.service
ERROR?
sudo systemctl enable docker.service
systemctl start docker.service
ERROR?
yum remove docker
yum install epel-release.noarch
yum install docker-io
vim /etc/sysconfig/docker
OPTIONS='-g /docker/data -p /var/run/docker.pid'
service docker restart
docker pull centos
and that's how I got Docker to work on the new VM, if I mapped it correctly.
Also, one of the commands I might have used was thin_check. Somebody used it to verify Docker in this link.
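For reference, a hedged sketch of how thin_check is typically run against Docker's devicemapper metadata in the default loopback setup (the package name and metadata path are assumptions; stop the daemon first):
yum install -y device-mapper-persistent-data   # provides thin_check
service docker stop
thin_check /var/lib/docker/devicemapper/devicemapper/metadata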
EDIT +2:
Oh wow, this would explain even better what's happening here. See, the Docker server can be installed straight out of the box with CentOS 7; however, the daemon must still be installed from EPEL. As a reminder, the daemon is the item that actually runs the Docker service. The server just allows Docker to connect to the internet and view its repositories. Link is right here.
