qemu: uncaught target signal 11 (Segmentation fault) - core dumped in docker container after changing to an M1 mac - node.js

A previously working (on linux) dockerized project builds okay on my new M1 mac, but fails while running with this error:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
I know this is due to the different architecture (the Mac is arm64, the Linux machine was amd64/x86_64), but I don't know how to change my project to work. How can I move forward?
My base image is mhart/alpine-node:16, and I am running a Node.js (TypeScript) application.
What I have tried - having read many similar threads (and thus why this is not a duplicate) - but which hasn't helped:
building for amd64 (linux/amd64, the Intel/AMD architecture), which the Mac should then emulate, but this didn't change much (see the sketch after this list)
adding a command to the Dockerfile to install/update qemu: RUN apk add --update qemu-x86_64
updating the base alpine node image to the latest version
enabling experimental features in Docker Desktop
from Docker Desktop I can see the images are emulated arm64 architecture; I removed the arm64 platform specifier in my Dockerfile and the similar platform override in the docker compose file, and I can then build an app image which runs without that amd64 tag, but it still hits the same issue and fails
trying with a plain node (node14) base image
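For reference, the platform-forcing attempt from the first item looked roughly like this (a sketch; the image tag, app name and service name are placeholders):
# in the Dockerfile (needs BuildKit):
FROM --platform=linux/amd64 mhart/alpine-node:16
# or at build time:
docker build --platform linux/amd64 -t my-app .
# or per service in docker-compose.yml:
services:
  app:
    platform: linux/amd64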

Sharing in case it helps anyone else and saves some hours:
In the end what worked was changing my base image from mhart/alpine-node:16 to an official Node.js image, node:16.18.1-alpine3.15.
The official node images include variants built for the M1 (arm64) architecture.
I first tried the latest, 19.1.0-alpine, which resolved the qemu failure above but wasn't compatible with my application, so I selected a v16 version which was - and problem solved.
If you are in a similar situation, I recommend looking for newer images that have been built with the arm64 architecture in mind, perhaps even changing your base image (as in my case) and adjusting the Dockerfile to cover whatever the new base is missing.
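As a rough sketch, the change amounted to swapping the base image; everything after the FROM line below is a placeholder for the real project's install/build/start steps:
# previously: FROM mhart/alpine-node:16
FROM node:16.18.1-alpine3.15
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# placeholder entrypoint; the real start command depends on the project
CMD ["node", "dist/index.js"]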

Related

Building docker image FROM scratch using a buildroot linux

Okay, so this is a complicated topic, so thanks to anyone who actually takes the time to read this. This all started by trying to create an executable from a Python script to run on the target arch.
The target arch is arm64. I am doing all of this on a Mac. The major gotcha is that the target device uses uClibc; if it used glibc or musl I would be able to cross compile using the Ubuntu container described below or an Alpine container with Python (using pyinstaller to create the executable).
I created a buildx container and ran an Ubuntu container on the arm64 architecture (confirmed). From there I am using a tool called Buildroot from within the Ubuntu container to create a custom Linux filesystem, which after much waiting creates "rootfs.tar".
Okay, now with all that non-Docker stuff out of the way: I copy this rootfs.tar file to my host and try to build an image to run my custom Linux.
Dockerfile
FROM scratch
MAINTAINER peskyadmin
ADD rootfs.tar /
build command
docker buildx build -t "${DOCKER_USER}/testrtfs:latest" --platform linux/arm64 --push .
run command
docker run --privileged --entrypoint "/bin/sh" -it "$DOCKER_USER/testrtfs:latest" --platform linux/arm64
run output
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
I am using the latest version of Docker Desktop. I don't think that the warning is an issue, because when I run the Ubuntu container created with buildx it shows the same warning message but is running on the target arch.
My question is: what am I doing wrong? I do not understand this error. My gut tells me the issue has to do with the Dockerfile, but I am not sure; it could also be an issue with how I am using Buildroot to create the rootfs.tar.
The target CPU is a Cortex-A53, which is the same one that is in the Raspberry Pi 3. I suppose I could install the image directly onto a bare-metal Pi and then try to cross compile on there, but I would really like to keep everything virtualized on my Mac.
There is no need for any containers. Buildroot (and other build systems) do cross compiling, which means you can build for a different target than the machine you build on.
In other words, you simply select arm64 as the target architecture, make the build, then install the resulting rootfs on your target.
However, this rootfs completely replaces the target rootfs, so it's not relevant that the target is uclibc. So my guess is that you want to install just a single executable. Doing that is made more difficult with shared libraries, because you need to copy not just the executable, but also any libraries it links with. So it may help to configure Buildroot to link statically (BR2_STATIC_LIBS).
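A sketch of how that can be set (the menu location and the defconfig name here are illustrative and may vary between Buildroot versions):
# interactively: Build options -> Libraries -> static only
make menuconfig
# or append the symbol to a saved defconfig and reload it before building:
echo 'BR2_STATIC_LIBS=y' >> configs/my_board_defconfig
make my_board_defconfig
make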
-EDIT-
If you want to run an environment similar to the target, it's not possible to run this in docker unless your build machine is also an arm64. That's what the warning "requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64)" is saying. Instead of docker, you need to use virtualisation, e.g. qemu.
You can bring up a qemu environment for arm64 with make qemu_aarch64_virt_defconfig. Check out board/qemu/aarch64-virt/readme.txt for how to start qemu.
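Roughly along these lines (a sketch; the readme.txt is the authoritative reference and the exact flags differ between Buildroot versions):
make qemu_aarch64_virt_defconfig
make
# boot the resulting kernel and rootfs under qemu's aarch64 "virt" machine
qemu-system-aarch64 -M virt -cpu cortex-a53 -m 1024 -nographic \
  -kernel output/images/Image \
  -append "rootwait root=/dev/vda console=ttyAMA0" \
  -drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -netdev user,id=eth0 -device virtio-net-device,netdev=eth0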

Building Node.js Binary with `pkg` for ARMv7 / Ubuntu 14 fails when running output with libstdc++.so.6 Error

Cross compile? I have a device my company still manufactures and deploys world-wide, running Ubuntu 14.04.3 ... on an ARMv7 processor. I have a Node app I'm creating for the product family, and I'd like to run it on this device as well. I tried going the whole nvm route to install and run Node directly on it, but gyp fails to build some deps from the project locally on the device. I'd really much rather use pkg to build a binary to deploy to the device.
If you aren't familiar, pkg is: https://www.npmjs.com/package/pkg
However, building the examples/express example from the pkg repo with pkg 4.4.9 like pkg . --targets node10.15.3-linux-armv7 --no-bytecode (on a linux box) and scp'ing the resulting binary over to the IOT device running the armv7 / Ubuntu 14 setup, I get the following error when trying to run the binary:
./express-example: relocation error: ./express-example: symbol
_ZTVNSt7__cxx1115basic_stringbufIcSt11char_traitsIcESaIcEEE,
version GLIBCXX_3.4.21 not defined in file libstdc++.so.6 with link time reference
(Line wraps added to break long line)
Googling the error (specifically with regards to GLIBC and libstdc++.so.6) has gotten me nowhere. I can't figure out if the libstdc++ on the device is too old or too new. Tried updating libstdc++ but it said it was already at the latest version (for that OS.) I've got no clue where to go from here... Is there some way to compile the binary via pkg with different options, or statically link the libraries it needs instead of relying on system libraries?
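(For reference, one way to see which GLIBCXX versions the device's libstdc++ actually exports - the path below is the usual one on armhf Ubuntu and may differ on the device:)
strings /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 | grep GLIBCXX
If that list stops short of GLIBCXX_3.4.21, the library on the device is older than the one the prebuilt binary was linked against.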
Also, when I try to use a newer Node version (like 10.21.0, etc.) it fails with an "unable to build" message. I know I can cross-compile regular C/C++ code on that Linux box for ARM (we do that currently with Jenkins in the cloud on a Linux box), so is there a way to get cross-compiling working at build time?
Here's the error for building with 10.21:
[root@decidr ~/devel/pkg/examples/express]# ./node_modules/.bin/pkg . --targets node10-linux-armv7 --no-bytecode
> pkg@4.4.9
> Fetching base Node.js binaries to PKG_CACHE_PATH
fetched-v10.21.0-linux-armv7 [ ] 0%
> Error! 404 Not Found
https://github.com/zeit/pkg-fetch/releases/download/v2.6/uploaded-v2.6-node-v10.21.0-linux-armv7
> Asset not found by direct link:
{"tag":"v2.6","name":"uploaded-v2.6-node-v10.21.0-linux-armv7"}
> Not found in GitHub releases:
{"tag":"v2.6","name":"uploaded-v2.6-node-v10.21.0-linux-armv7"}
> Building base binary from source:
built-v10.21.0-linux-armv7
> Error! Not able to build for 'armv7' here, only for 'x64'
I find myself rather stuck - can't run node directly on the device, and the device won't run the pkg-built binary, even though it builds ARMv7 code. No idea how to proceed forward - any assistance or ideas? :)

One of the IoT Edge Module is in Backoff state Raspberry Pi 4 with Raspbian OS

I have developed a module and built the image for the arm64v8 architecture, as my Edge device is running on a Raspberry Pi 4. I got the file deployment.arm64v8.json in the config folder correctly. But when I right-click on the device in Visual Studio Code and select Create Deployment for Single Device, the modules get added, but one of the modules is showing a Backoff state. What could be the problem here? I was strictly following this doc.
I also tried restarting the services.
Device Information
Host OS: Raspberry OS
Architecture: Arm64v8
Container OS: Linux containers
Runtime Versions
iotedged: iotedge 1.0.9.4
Docker/Moby [run docker version]:
Update:
I am trying to build an arm32 image on my 64-bit Windows dev machine; I guess that is the reason why I am getting this issue. Now I have 3 options:
1. Install the 64-bit version of Raspberry OS from here
2. Set up a 32-bit virtual machine, use it as a dev machine, and build 32-bit images
3. Run the Visual Studio Code solution in the WSL instance I already have
Could you please tell me what would be the better way?
There were a couple of things I was doing wrong. The first is that I was trying to build an arm64 image on my 64-bit Windows dev machine and then deploy it to the arm32 Raspbian OS, which will never work. You can see the architecture and other details by running the commands below.
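(The command that matters here is presumably uname -m, which prints the machine hardware name; cat /etc/os-release shows the OS details:)
uname -m
cat /etc/os-release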
If it says aarch64 then it is 64-bit; if it says armv7l then it is 32-bit. In my case, it is armv7l. So now I had to build arm32 container images on my 64-bit Windows host machine and use them on my Raspberry Pi 4. According to this doc, it is definitely possible.
You can build ARM32 and ARM64 images on x64 machines, but you will not be able to run them
Running was not my problem, as I just had to build the image and I will use it in my Raspberry Pi. To make it work, I had to change my Dockerfile.arm32v7, specifically the first line where we pull the base image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SendTelemetry.dll"]
The "build-env" image should be the same architecture as the host OS, while the final image should be the target OS architecture. Once I had made the changes to the Dockerfile, I changed the version in the module.json file inside my module folder so that the new image would be pushed to the Container Registry with a new tag when I used the Build and Push IoT Edge Solution option (right-click deployment.template.json). Then I used the Create Deployment for Single Device option (right-click the device name in Visual Studio Code). When I monitor the device (Start Monitoring Built-in Event Endpoint), I get the expected output.
Microsoft support was really helpful with this issue; they helped me solve the GitHub issue that I had posted.

Are there any limitations regarding the age of a linux distribution which can be used to create a docker base-image?

I'm wondering if it's possible to use a very old Linux distribution like Debian GNU/Linux 3.1 (Sarge), create a base image from it, and run legacy code that no longer works under "younger" distros.
The only thing I found about it was somebody successfully using Ubuntu Feisty: Run old Linux release in a Docker container?
Are there any known limitations?
Your host needs to have a minimum version of the Linux kernel, and that version is 3.10.
See
Docker minimum kernel version 3.8.13 or 3.10
An extract from the previous link:
There's also a shell-script to check if your system has the required dependencies in place and to check which features are available:
https://github.com/docker/docker/blob/master/contrib/check-config.sh
So you can use this to check whether you will be able to use Docker on this host.
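For example (a sketch; the repository has since been renamed to moby/moby, so the URL may need adjusting):
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh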
From
https://wiki.debian.org/DebianSarge?action=show&redirect=Sarge
I see:
kernel: Linux 2.4.27 and 2.6.8
So it may not work: the container would share your host's much newer kernel, and a userland built for 2.4/2.6 may not cope with it.

Which commands of the defined Linux Distribution are available in a Docker container?

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as Ubuntu's default package manager. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available as they would be in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I think not all the tools of a real installation are included. But how can I know which commands I can rely on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they are removing more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of, you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
A container is presenting you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
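For example, a minimal sketch using the xenial image from the example above (package names are Ubuntu's and differ between distributions; for an EOL release the apt archives may also have moved):
FROM ubuntu:xenial
RUN apt-get update && \
    apt-get install -y --no-install-recommends iputils-ping lsb-release && \
    rm -rf /var/lib/apt/lists/*
# ping and lsb_release are now available inside the container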
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
