Okay, so this is a complicated topic, so thanks to anyone who actually takes the time to read this. This all started with trying to create an executable from a Python script to run on the target arch.
The target arch is arm64. I am doing all of this on a Mac. The major gotcha is that the target device uses uClibc; if it used glibc or musl I would be able to cross-compile using the Ubuntu container described below or an Alpine container with Python, using PyInstaller to create the executable as sketched below.
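For reference, the PyInstaller step itself would be nothing more than the following sketch, run inside whichever arm64 container ends up working (the script name is just a placeholder):
# inside the arm64 build container; myscript.py is a placeholder name
pip install pyinstaller
pyinstaller --onefile myscript.py
# the resulting single-file executable ends up in dist/myscript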
I created a buildx builder and ran an Ubuntu container on the arm64 architecture (confirmed). From there I am using a tool called Buildroot from within the Ubuntu container to create a custom Linux filesystem, which after much waiting produces "rootfs.tar".
Okay, now with all that non-Docker stuff out of the way: I copy this rootfs.tar file to my host and try to build an image to run my custom Linux.
Dockerfile
FROM scratch
MAINTAINER peskyadmin
ADD rootfs.tar /
build command
docker buildx build -t "${DOCKER_USER}/testrtfs:latest" --platform linux/arm64 --push .
run command
docker run --privileged --entrypoint "/bin/sh" -it "$DOCKER_USER/testrtfs:latest" --platform linux/arm64
run output
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
I am using the latest version of Docker Desktop. I don't think the warning is an issue, because when I run the Ubuntu container created with buildx it shows the same warning message but is running on the target arch.
My question is: what am I doing wrong? I do not understand this error. My gut is telling me the issue has to do with the Dockerfile, but I am not sure, as it could also be an issue with how Buildroot creates the rootfs.tar.
The target CPU is a Cortex-A53, which is the same one that is in the Raspberry Pi 3. I suppose I could install the image directly onto a bare-metal Pi and then build on there, but I really would like to keep everything virtualized on my Mac.
There is no need for any containers. Buildroot (and other build systems) do cross compiling, which means you can build for a different target than the machine you build on.
In other words, you simply select arm64 as the target architecture, make the build, then install the resulting rootfs on your target.
However, this rootfs completely replaces the target rootfs, so it's not relevant that the target is uClibc. So my guess is that you want to install just a single executable. Doing that is made more difficult with shared libraries, because you need to copy not just the executable, but also any libraries it links with. So it may help to configure Buildroot to link statically (BR2_STATIC_LIBS).
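For example, a minimal way to enable that from the Buildroot top-level directory (the menu path and commands here are approximate; check your Buildroot version):
# interactively: Build options -> libraries -> static only (BR2_STATIC_LIBS)
make menuconfig
# or set it non-interactively and let Buildroot fill in the rest
echo 'BR2_STATIC_LIBS=y' >> .config
make olddefconfig
make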
-EDIT-
If you want to run an environment similar to the target, it's not possible to run this in docker unless your build machine is also an arm64. That's what the warning "requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64)" is saying. Instead of docker, you need to use virtualisation, e.g. qemu.
You can bring up a qemu environment for arm64 with make qemu_aarch64_virt_defconfig. Check out board/qemu/aarch64-virt/readme.txt for how to start qemu.
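Roughly, that looks like the following; the exact qemu-system-aarch64 invocation is in the readme, and the one below is only an approximation:
make qemu_aarch64_virt_defconfig
make
# approximate launch command; see board/qemu/aarch64-virt/readme.txt for the real one
qemu-system-aarch64 -M virt -cpu cortex-a53 -nographic \
  -kernel output/images/Image \
  -append "rootwait root=/dev/vda console=ttyAMA0" \
  -drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0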
Related
I am learning to build a Linux system with Buildroot. The qemu+arm configuration is a convenient way for me to get an environment that boots a Linux system. I can get the whole system from the script below, and it works fine; the Linux system runs.
git clone https://git.buildroot.net/buildroot
cd buildroot
git checkout 014ec19dfe
make qemu_arm_versatile_defconfig
make
cd output/images/
./start-qemu.sh
I see there is a zImage for the Linux kernel. As I understand it, zImage is composed of decompression boot code plus the compressed Linux binary, and I think Image at ../build/linux-5.15.18/arch/arm/boot/Image is the original uncompressed Linux binary. So I tried to use Image to replace zImage (I copied Image to the same folder, then changed zImage in start-qemu.sh to Image). But it failed; when I run it I only see
VNC server running on ::1:5900
And there is no other log output. I tried other systems like riscv, where there is only an Image, and it can be run. Is there something I misunderstand about the arm Linux Image? (By the way, I am curious why only arm32 has decompression code to build a zImage, while aarch64 and riscv do not support this.)
We're building an OS image using Yocto on Debian, which outputs a bzip2-compressed rootfs tarball that we can turn into a Docker base image using docker import; we push this image to our registry to use as a base image.
cp build/tmp/deploy/images/raspberrypi4/device.tar.bz2 .
docker import device.tar.bz2 registry/base_image
docker push registry/base_image
We include the base image as part of another docker image:
FROM registry/base_image
ADD target/app.jar app.jar
ADD docker-run.sh run.sh
ENTRYPOINT "./run.sh"
This image is then successfully built by our CI on a linux (Amazon Linux 2) agent, and pushed to the registry. I'm able to pull the image and run it on a Mac with the current version of Docker for Mac.
However, trying to run the same docker image on a linux machine (even on the same linux build agent) results in the following exec format error:
standard_init_linux.go:228: exec user process caused: exec format error
Using an alternative docker image as the base allows the entrypoint to execute, so I'm pretty sure the issue is related to our custom base image.
As Docker is largely cross-platform, I'm surprised it works on macOS (Intel and M1) but not Linux (tested on Ubuntu and Amazon Linux). I've tried both the Ubuntu and Docker hosted apt repositories for the Docker install.
How can I further debug?
The issue here was that the base image was ARM-based, and Docker for Mac can run ARM images out of the box, even on Intel machines.
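You can confirm which architecture an image was built for by inspecting it (a quick check, not from the original answer; the image name is the one from the question):
docker image inspect registry/base_image --format '{{.Os}}/{{.Architecture}}'
# prints e.g. linux/arm64 for an ARM image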
https://docs.docker.com/desktop/multi-arch/
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
There's a good write up here for running arm docker images on linux x86 hosts
https://matchboxdorry.gitbooks.io/matchboxblog/content/blogs/build_and_run_arm_images.html
After installing QEMU on your host OS, you need to mount the QEMU binary:
docker run -it --name your-container-name -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static your-arm-image
I can't compile DPDK inside a Docker container running under WSL2 as the VM (with Windows 10 as the host machine).
Background
I am trying to compile locally, inside a WSL container, a DPDK library that used to be built on remote native Linux machines.
The Dockerfile running the compilation installs kernel headers, the GNU toolchain and various other dependencies. The distribution is CentOS 7.
The containers are managed by Docker Desktop.
The exact versions are not important here.
The Problem
Similar problems across DPDK versions.
In DPDK 20.11, using the meson build system, the file kernel/linux/meson.build reports:
../kernel/linux/meson.build:23:1: ERROR: Problem encountered: Cannot compile kernel modules as requested - are kernel headers installed?
If I compile other DPDK versions or build using other build systems (makefiles), I get variants of the same error.
Inside /lib/modules there is no entry matching the WSL2 "uname -r" output.
Although WSL2 has /lib/modules/5.4.72-microsoft-standard-WSL2 (as a symlink), this symlink does not appear in the container.
The solution is adding this line to the Dockerfile*:
RUN ln -s /lib/modules/$(ls /lib/modules/) /lib/modules/$(uname -r)
*(This assumes only one entry is found on /lib/modules and that /usr/src/kernels exists for that entry.)
Another solution (which I didn't test) is to run the container with the host's kernel directories mounted in:
docker run --name test -v /usr/src/kernels:/usr/src/kernels -v /lib/modules:/lib/modules -dt image-name
This assumes your host has kernel headers installed and that they can be found (i.e., /usr/src/kernels/XXX/ exists and the /lib/modules/XXX/build symlink is not broken).
I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker image with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as the default package manager of Ubuntu. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I think not all the tools of a real installation are included. But how can I know which ones I can rely on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they are removing more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of, you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
E.g, this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, it probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
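For instance, to guarantee that the lsb_release command from the question is present (a small illustrative Dockerfile, not from the original answer):
FROM ubuntu:16.10
# on Ubuntu the lsb_release command is provided by the lsb-release package
RUN apt-get update && apt-get install -y lsb-release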
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
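In practice that usually means running the service itself in the foreground as the container's main process, e.g. for nginx (an illustrative sketch, not from the original answer):
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
# run nginx in the foreground instead of expecting an init system to start it
CMD ["nginx", "-g", "daemon off;"]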
I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86/64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. That makes sense to me, but then I don't get why it runs in Docker on Windows, and I wonder whether I'm missing an opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf )
If you were to configure your Linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
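One common way to do that (not from the original answer; the multiarch helper image below is a widely used option) is to register the qemu handlers once on the host:
# registers binfmt_misc handlers that point at statically linked qemu binaries
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# afterwards ARM images should run directly, e.g.
docker run --rm -ti hypriot/rpi-node ls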
Note that the linuxkit VM uses the 'F' (fix-binary) flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug to do so: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030 )
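You can check whether a given handler was registered with the 'F' flag by looking at its binfmt_misc entry (a quick check, assuming the ARM handler is registered under the usual qemu-arm name):
# the "flags:" line shows F when the fix-binary flag is set
cat /proc/sys/fs/binfmt_misc/qemu-arm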
On Windows and Mac, Docker runs inside a Linux VM. So, I think, for your container under Windows an ARM Linux VM was started, but under native Linux the native architecture is used.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 Docker image on a Raspberry Pi 2 (which uses an ARM architecture). I am pretty sure it is the same error when you do it the other way round.
So, as Kulti said, Windows/Mac must have started an ARM Linux VM.
If you wish to work with ARM Docker images on Linux, you may want to try running a Linux Docker VM manually. I think you can do it using "docker-machine", even on Linux: see the Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured such that it can run images of other architectures through Qemu user-mode emulation. You can configure native Linux in a similar way and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part #1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable, through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part #2 builds on Part #1 to show how to configure the Linux kernel (installed on any architecture) to interpret ARM binaries using Qemu user emulation.
Finally, Part #3 shows how to apply the same trick to a Linux setup in a Docker container, which means that a Linux Docker container (which could be for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about Docker's implementation or containerization that allows Docker on Windows to execute ARM binaries. Instead, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through Qemu's user-mode emulation of an ARM CPU.
I know this post is old but I will post my solution here in case someone came here through Google.
This happens because your Docker host is not able to run images with the ARM architecture. To enable this in your Docker host, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures that I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
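To confirm the module is in place before running containers (just a verification sketch, not part of the original answer):
# binfmt_misc should be mounted and enabled
mount | grep binfmt_misc
cat /proc/sys/fs/binfmt_misc/status   # should print "enabled"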
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
Docker includes binfmt_misc in the embedded Linux VMs used by Docker Desktop, and there appears to be some additional functionality to avoid the need to manually mount the static qemu files inside the container.