DPDK compilation fails inside WSL2 Docker Desktop containers

I can't compile DPDK inside a Docker container running under WSL2 (with Windows 10 as the host machine).
Background
I am trying to compile locally, inside a WSL2 container, a DPDK library that used to be built on remote native Linux machines.
The Dockerfile running the compilation installs kernel headers, the GNU toolchain, and various other dependencies. The distribution is CentOS 7.
The containers are managed by Docker Desktop.
Exact versions are not relevant here.
The Problem
I hit similar problems across DPDK versions.
In DPDK 20.11, using the Meson build system, the file kernel/linux/meson.build fails with:
../kernel/linux/meson.build:23:1: ERROR: Problem encountered: Cannot compile kernel modules as requested - are kernel headers installed?
If I compile other DPDK versions or build with other build systems (makefiles), I get variants of the same error.
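For reference, the failing configure step inside the container looks roughly like this (a sketch; -Denable_kmods=true is the meson option that makes kernel/linux/meson.build probe for kernel headers):
# From a DPDK 20.11 source tree, inside the container:
meson setup build -Denable_kmods=true    # fails as above
ninja -C build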

Inside the container, /lib/modules has no entry matching the output of uname -r under WSL2.
Although WSL2 itself has /lib/modules/5.4.72-microsoft-standard-WSL2 (as a symlink), this symlink does not appear in the container.
The solution is to add this line to the Dockerfile*:
RUN ln -s /lib/modules/$(ls /lib/modules/) /lib/modules/$(uname -r)
*(This assumes exactly one entry is found in /lib/modules and that /usr/src/kernels exists for that entry.)
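A slightly more defensive variant of that line (an untested sketch) fails the build early when this assumption does not hold:
# Abort unless /lib/modules holds exactly one entry, then create the symlink:
RUN set -e; \
    [ "$(ls /lib/modules/ | wc -l)" -eq 1 ] \
        || { echo "expected exactly one entry in /lib/modules" >&2; exit 1; }; \
    ln -s /lib/modules/$(ls /lib/modules/) /lib/modules/$(uname -r)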
Another solution (which I didn't test) is to run the container with the host's kernel sources and modules bind-mounted:
docker run --name test -v /usr/src/kernels:/usr/src/kernels -v /lib/modules:/lib/modules -dt image-name
This assumes your host has kernel headers installed and that they can be found (i.e., /usr/src/kernels/XXX/ exists and the /lib/modules/XXX/build symlink is not broken).
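Either way, a quick sanity check inside the container (a sketch; the meson probe essentially boils down to this path resolving):
# Should list the kernel build tree rather than fail:
ls -l /lib/modules/$(uname -r)/build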

Related

Building a Docker image FROM scratch using a Buildroot Linux

Okay, so this is a complicated topic, so thanks to anyone who actually takes the time to read this. This all started with trying to create an executable from a Python script to run on the target arch.
The target arch is arm64, and I am doing all of this on a Mac. The major gotcha is that the target device uses uClibc; if it used glibc or musl, I would be able to cross-compile using the Ubuntu container described below or an Alpine container with Python (using PyInstaller to create the executable).
I created a buildx builder and ran an Ubuntu container on the arm64 architecture (confirmed). From within that Ubuntu container I am using a tool called Buildroot to create a custom Linux filesystem, which after much waiting produces rootfs.tar.
Okay, now with all that non-Docker stuff out of the way: I copy this rootfs.tar file to my host and try to build an image to run my custom Linux.
Dockerfile
FROM scratch
MAINTAINER peskyadmin
ADD rootfs.tar /
build command
docker buildx build -t "${DOCKER_USER}/testrtfs:latest" --platform linux/arm64 --push .
run command
docker run --privileged --entrypoint "/bin/sh" -it "$DOCKER_USER/testrtfs:latest" --platform linux/arm64
run output
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
I'm using the latest version of Docker Desktop. I don't think the warning is an issue, because when I run the Ubuntu container created with buildx it shows the same warning yet runs on the target arch.
My question is: what am I doing wrong? I do not understand this error. My gut says the issue is with the Dockerfile, but I am not sure; it could also be an issue with how Buildroot creates rootfs.tar.
The target CPU is a Cortex-A53, the same one that is in the Raspberry Pi 3. I suppose I could install the image directly onto a bare-metal Pi and then cross-compile on there, but I would really like to keep everything virtualized on my Mac.
There is no need for any containers. Buildroot (and other build systems) cross-compile, which means you can build for a different target than the machine you build on.
In other words, you simply select arm64 as the target architecture, run the build, then install the resulting rootfs on your target.
However, this rootfs completely replaces the target rootfs, so the fact that the target uses uClibc is not relevant. My guess, then, is that you really want to install just a single executable. That is made more difficult by shared libraries, because you need to copy not only the executable but also any libraries it links with. So it may help to configure Buildroot to link statically (BR2_STATIC_LIBS), as sketched below.
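A sketch of what that looks like in the Buildroot configuration (BR2_STATIC_LIBS is the symbol named above; the exact menuconfig path may vary between Buildroot versions):
# In your Buildroot .config, or via 'make menuconfig'
# (Build options -> Libraries -> static only), then rebuild:
BR2_STATIC_LIBS=y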
-EDIT-
If you want to run an environment similar to the target, it is not possible to do that in Docker unless your build machine is also arm64. That is what the warning "requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64)" is saying. Instead of Docker, you need to use virtualisation, e.g. QEMU.
You can bring up a QEMU environment for arm64 with make qemu_aarch64_virt_defconfig. Check out board/qemu/aarch64-virt/readme.txt for how to start QEMU; the steps are sketched below.
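Putting that together (the defconfig name and readme path are the ones given above):
make qemu_aarch64_virt_defconfig
make
# board/qemu/aarch64-virt/readme.txt documents the exact
# qemu-system-aarch64 invocation for the images under output/images/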

Can't execute .run file in Ubuntu container

I'm trying to install the Microchip XC8 compiler in an Ubuntu container, to set up a GitLab CI pipeline for building the project. But there is no response after I run the xc8-v1.45-full-install-linux-installer.run file.
Here is the environment I have:
Official Ubuntu 18.04 LTS image in a Docker container
Docker version 19.03.13
Windows 10 as Docker host
Microchip XC8 v1.45 compiler
And the commands I used for downloading and installing are as follows:
# Download XC8 from the Microchip official site
wget http://ww1.microchip.com/downloads/en/DeviceDoc/xc8-v1.45-full-install-linux-installer.run
# Change the access permission
chmod +x xc8-v1.45-full-install-linux-installer.run
# Execute the ".run" file
./xc8-v1.45-full-install-linux-installer.run
After running all of these, there is no response at all. Obviously, something went wrong.
I have tried the installation process above on a native Ubuntu computer, and it just works fine.
Is there any prerequisite I missed? Or is there some other way for me to achieve the same goal?
Thanks!
I was having this problem on 64-bit Ubuntu 20.04 as well.
I had two problems: I could not set the execute bit because the file was on an NTFS partition, and the executable required 32-bit libraries to run.
First I had to move the file off the NTFS partition so that I could mark it executable. In my case I moved it to my Downloads directory and then, in that folder, executed:
sudo chmod +x ./xc8-v1.42-full-install-linux-installer.run
It still would not run, so I checked its type by executing:
file ./xc8-v1.42-full-install-linux-installer.run
which resulted in the response:
./xc8-v1.42-full-install-linux-installer.run: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, no section header
Eventually the main solution was to install 32-bit libraries:
sudo apt-get install lib32z1
With that 32-bit library installed, running this worked:
sudo ./xc8-v1.42-full-install-linux-installer.run
Using existing MPLAB Docker repo
This GitLab.com project exists:
MPLAB X IDE/IPE podman/docker container
This may not help with your .run file problem directly, but switching to an existing Docker container might make things easier for you.
They also work with .run files, so you may find your solution over there as well.
Features:
Has a general-purpose installation of MPLAB X and the toolchain.
X11 forwarding for working in the IDE is supported.
Can use USB from inside the container.
Requires some setup. See readme for installation instructions.
MIT License
I still need to test this myself, but just wanted to share it here anyway.
Posted in the microchip forums by the creator:
Dockerfile for MPLAB X IDE/IPE and toolchains

Which commands of a given Linux distribution are available in a Docker container?

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get install nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as the default package manager of Ubuntu. But how deep does this go? Can I assume that typical commands of the distribution, like lsb_release, are available just as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions differ. I need to know which commands are available, for example, when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I assume not all the tools of a real installation are included. But how can I know which commands I can count on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they remove more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of, you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you had just used FROM ubuntu, the same Dockerfile would have built correctly while trusty was the latest tag and then failed once it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, it probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
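For example, a sketch for the ubuntu:16.10 image from the question (on Ubuntu, the lsb-release package provides the lsb_release command and iputils-ping provides ping):
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y \
        lsb-release \
        iputils-ping \
    && rm -rf /var/lib/apt/lists/*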
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
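The usual workaround is to run the daemon itself in the foreground as the container's main process rather than going through an init system; a minimal sketch for the nginx example from the question:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx \
    && rm -rf /var/lib/apt/lists/*
# Run nginx in the foreground instead of 'service nginx start':
CMD ["nginx", "-g", "daemon off;"]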

Can run ARM/rpi images in Docker on Windows but not on Linux

I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86/64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. The error itself makes sense to me, but then I don't get why it runs in Docker on Windows, and I wonder whether I'm missing an opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to do this?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between their Linux VM and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
Note that the linuxkit VM uses the 'F' flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug asking for it: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
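For a one-shot setup on a typical Linux host, the multiarch project's registration image is a common shortcut (a sketch; its --persistent flag is what registers the handlers with 'F' so the qemu binary need not live inside each container):
# Register qemu handlers with the fix-binary ('F') flag:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# ARM images should then run without qemu mounted into them:
docker run --rm -t arm32v7/debian uname -m    # expect: armv7l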
On Windows and Mac, Docker runs inside a Linux VM. So I think that under Windows your container was started in an ARM-capable Linux VM, whereas under native Linux the native architecture is used.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run a x86 docker image on a Raspberry Pi 2 (Which works with an ARM architecture). I am pretty sure it might be the same error when you do it the other way round.
So, as Kulti said, Windows/MAC must have started an ARM Linux VM.
If you wish to work with ARM docker images on Linux, you may want to try running a linux docker VM manually. I think you can do it using "docker-machine" even on linux : Docker documentation for docker-machine. (Haven't done it myself so I am not sure)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured such that it can run images of other architectures through qemu user-mode emulation. You can configure native Linux in a similar way, and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part 1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
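You can inspect those registrations directly on a host that has qemu handlers set up (a sketch; entry names like qemu-arm are the conventional ones):
ls /proc/sys/fs/binfmt_misc/
# Each entry shows its magic/mask byte patterns, the interpreter path,
# and whether the 'F' flag is set:
cat /proc/sys/fs/binfmt_misc/qemu-arm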
Part 2 builds on Part 1 to show how to configure the Linux kernel (on any architecture) to interpret ARM binaries using qemu user emulation.
Finally, Part 3 applies the same trick to a Linux setup inside a Docker container, which means that a Linux Docker container (for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about Docker's implementation or containerization that allows Docker on Windows to execute ARM binaries. Rather, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through qemu's user-mode emulation of an ARM CPU.
I know this post is old, but I will post my solution here in case someone comes here through Google.
This happens because your Docker host is not able to run images with the ARM architecture. To enable this in your Docker setup, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
Docker includes binfmt_misc in the embedded Linux VMs used by Docker for Desktop, and there appears to be some additional functionality there to avoid the need to manually mount the static qemu files inside the container.
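A quick way to confirm that is to run the command from the original question on Docker Desktop, where it should work as-is:
docker run -ti hypriot/rpi-node ls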

Emulating Linux binaries under Mac OS X

How do I run Linux binaries under Mac OS X?
Googling around, I found a couple of emulators, but none for running Linux binaries on a Mac. There are quite a few posts about running Mac OS X on Linux and that kind of thing, but that's the opposite of what I want to do.
Update:
Thanks for all the answers! I am fully aware of MacPorts and Fink and all the other things; and no, I do not want any of those utilities, and I do not want any of the package managers; I prefer to compile things myself. I also have Parallels and could set up virtual machines and all that jazz...
The only thing I want to do is find a way to run a binary that I do not have the source code for, which has been compiled for Linux, but run it under Mac OS X rather than Linux. Hence my question about emulators.
Well, there is a project introducing something like Linux's binfmt_misc to OS X, so now what you need is an ELF loader, a dynamic linker that can load both Mach-O and ELF, and some mechanism to translate Linux syscalls to OS X ones.
Just for inspiration: you could implement the dynamic linker so that it ignores the filename extension; both libfoo.so.1 (a Linux ELF) and libfoo.1.dylib (a Mach-O) could be loaded. That way, OS X versions of system libraries can be reused, so you do not need to write a libc.so "hosted on OS X", and syscalls can be handled by a kext that translates Linux calls to OS X ones in the kernel.
Or, more elegantly, implement a stripped-down Linux kernel as a kext that makes the OS X kernel dual-purpose. However, that would require you to use two sets of libraries. (Binaries do not clash, so it is largely okay.)
Set up a virtual machine (I personally use VMWare Fusion) and then install whatever distro of Linux you desire on the virtual machine.
Or, if you have the source to the Linux program, chances are you can recompile it on a Mac and run it natively. If you install Fink or MacPorts, you can install a lot of open source programs without much trouble.
I recently found Noah, which you can use to run Linux binaries on macOS. You can install Noah via Homebrew (brew install linux-noah/noah/noah). Then you should be able to do this:
noah linux_binary
In my experience the behavior of the binary matches what I see on my Ubuntu machine.
You might have some luck running Linux executables under Mac OS X using QEMU's user-space emulator.
If you decide to go the virtualization route, consider also VirtualBox.
Also, if you only need UNIX-like command-line tools, there is the MacPorts project. This is basically how I set up git on my Mac: after installing MacPorts, you just have to run the sudo port install git command to install git on your system.
Noah does not allow the binaries to execute properly for me. Use Docker Desktop for Mac instead.
Just do:
docker pull centos:latest # 73MB CentOS docker image
Make a folder for what is needed to run your binary, and in your Dockerfile:
FROM centos
COPY your_binary /bin/
ENTRYPOINT ["your_binary"]
and you can build it with
docker build -t image_name .
then execute with
docker run image_name
as if it were the binary itself. It worked for me; hope it helps someone else. If you need specific outputs or to store files somewhere, you can mount volumes into the container with -v, for example:
docker run -v path_to_my_stuff:/docker_stuff image_name
though adding a WORKDIR /docker_stuff line to the Dockerfile before ENTRYPOINT is probably best.
If you change ENTRYPOINT to
ENTRYPOINT ["bash", "-c"]
and add
CMD ["your_binary"]
underneath it, you can pass the command into the image like this:
docker run -v path_on_local:/in_container_path image_name "your_binary some_parameters -optionrequiringzerowhitespacebeforeinputvalue"
