One of the IoT Edge modules is in Backoff state on Raspberry Pi 4 with Raspbian OS - azure

I have developed a module and built the image for the arm64v8 architecture, as my Edge device is running on a Raspberry Pi 4. I got the file deployment.arm64v8.json generated in the config folder correctly. But when I right-click on the device in Visual Studio Code and select Create Deployment for Single Device, the modules get added, but one of the modules shows a Backoff state. What could be the problem here? I was strictly following this doc.
I also tried restarting the services.
Device Information
Host OS: Raspberry Pi OS
Architecture: Arm64v8
Container OS: Linux containers
Runtime Versions
iotedged: iotedge 1.0.9.4
Docker/Moby [run docker version]:
Update:
I am trying to build an arm32 image on my 64-bit Windows dev machine; I guess that is the reason why I am getting this issue. Now I have 3 options:
1. Install the 64-bit version of Raspberry Pi OS from here.
2. Set up a 32-bit virtual machine, use it as a dev machine, and build 32-bit images.
3. I already have WSL running; maybe run the Visual Studio Code solution there?
Could you please tell me which would be the better way?

There were a couple of things I was doing wrong. The first is that I was building an arm64 image on my 64-bit Windows dev machine and then deploying that image to the arm32 Raspbian OS, which will never work. You can see the architecture and other details by running the commands below.
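On Raspbian, the standard check is uname (the exact commands weren't preserved in the post):
uname -m    # machine hardware name, e.g. aarch64 or armv7l
uname -a    # full kernel and architecture details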
If it says aarch64, then it is 64-bit. If it says armv7l, then it is 32-bit. In my case, it is armv7l. So now I had to build arm32 container images on my 64-bit Windows host machine and use them on my Raspberry Pi 4. According to this doc, that is definitely possible:
You can build ARM32 and ARM64 images on x64 machines, but you will not be able to run them
Running was not my problem, as I just had to build the image; I would run it on my Raspberry Pi. To make it work, I had to change my Dockerfile.arm32v7, specifically the first line, where we pull the base image.
# Build stage: the SDK image resolves to the host (x64) architecture,
# so restore/publish can run on the dev machine
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: explicitly arm32v7 to match the target Raspberry Pi OS
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SendTelemetry.dll"]
The "build-env" image should match the host OS architecture, while the final image should match the target OS architecture. Once I made the changes to the Dockerfile, I bumped the version in the module.json file inside my module folder so that a new image with a new tag would be pushed to the Container Registry when I used the Build and Push IoT Edge Solution option after right-clicking deployment.template.json. Then I used the Create Deployment for Single Device option after right-clicking the device name in Visual Studio Code. When I monitor the device (the Start Monitoring Built-in Event Endpoint option), I am getting this output.
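For reference, the version bump lives under image.tag.version in module.json; it looks something like this (the registry and module names here are placeholders, not from the original post):
{
  "$schema-version": "0.0.1",
  "description": "",
  "image": {
    "repository": "myregistry.azurecr.io/sendtelemetry",
    "tag": {
      "version": "0.0.2",
      "platforms": {
        "arm32v7": "./Dockerfile.arm32v7"
      }
    },
    "buildOptions": []
  },
  "language": "csharp"
}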
Microsoft Support was really helpful with this issue. They helped me solve the GitHub issue that I had posted.

Related

qemu: uncaught target signal 11 (Segmentation fault) - core dumped in docker container after changing to an M1 mac

A previously working (on Linux) dockerized project builds okay on my new M1 Mac, but fails while running, with this error:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
I know this is due to a different architecture (the Mac is arm64, the Linux machine was amd64), but I don't know how to change my project to make it work. How can I move forward?
My base image is mhart/alpine-node:16, and I am running a Node.js (TypeScript) application.
What I have tried (and thus why this is not a duplicate), having read many similar threads, but which hasn't helped:
- building for linux/amd64 (the Intel arch), which the Mac should then emulate, but this didn't change much
- adding a command to the Dockerfile to install qemu: RUN apk add --update qemu-x86_64
- updating the base Alpine Node image to the latest version
- enabling experimental features in Docker Desktop
- from Docker Desktop I can see the images are emulated arm64 architecture. I removed the 'from arm64' platform specifier in my Dockerfile and the similar platform override in the docker compose file; I can then build an app image which runs without that amd64 tag. However, it still hits the same issue and fails.
- trying a plain Node (node14) base image
Sharing in case it helps anyone else and saves some hours:
In the end, what worked was changing my base image from mhart/alpine-node:16 to the official image node:16.18.1-alpine3.15.
There is a range of official Node images tagged for the M1 (arm64) architecture.
I first tried the latest, 19.1.0-alpine, which resolved the qemu failure above but wasn't compatible with my application, so I selected a v16 version that was - and problem solved.
If you are in a similar situation, I recommend trying to find newer images that may have been built with arm64 architectures in mind, perhaps even changing (as in my case) your base image and adjusting the Dockerfile (with the delta of what's missing) to make it work, as in the sketch below.
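A minimal sketch of the swap, assuming a typical TypeScript build that outputs to dist/ (the scripts and paths are illustrative, not from the original project):
# was: FROM mhart/alpine-node:16 (ran under qemu emulation on the M1)
FROM node:16.18.1-alpine3.15
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build              # compile TypeScript to dist/
CMD ["node", "dist/index.js"]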

Building docker image FROM scratch using a buildroot linux

Okay, this is a complicated topic, so thanks to anyone who actually takes the time to read this. This all started by trying to create an executable from a Python script to run on the target arch.
The target arch is arm64. I am doing all of this on a Mac. The major gotcha is that the target device uses uClibc; if it used glibc or musl, I would be able to cross-compile using the Ubuntu container described below or an Alpine container with Python (using pyinstaller to create the executable).
I created a buildx container and ran an Ubuntu container on the arm64 architecture (confirmed). From there I am using a tool called Buildroot from within the Ubuntu container to create a custom Linux filesystem, which after much waiting produces "rootfs.tar".
Okay, now with all that non-Docker stuff out of the way: I copy this rootfs.tar file to my host and try to build an image to run my custom Linux.
Dockerfile
FROM scratch
MAINTAINER peskyadmin
ADD rootfs.tar /
build command
docker buildx build -t "${DOCKER_USER}/testrtfs:latest" --platform linux/arm64 --push .
run command
docker run --privileged --entrypoint "/bin/sh" -it "$DOCKER_USER/testrtfs:latest" --platform linux/arm64
run output
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
I'm using the latest version of Docker Desktop. I don't think the warning is the issue, because when I run the Ubuntu container created with buildx it shows the same warning message but runs on the target arch.
My question is: what am I doing wrong? I do not understand this error. My gut is telling me the issue has to do with the Dockerfile, but I am not sure; it could also be an issue with using Buildroot to create the rootfs.tar.
The target CPU is a Cortex-A53, which is the same one that is in the Raspberry Pi 3. I suppose I could install the image directly onto a bare-metal Pi and then try to cross-compile on there, but I would really like to keep everything virtualized on my Mac.
There is no need for any containers. Buildroot (and other build systems) do cross-compiling, which means you can build for a different target than the machine you build on.
In other words, you simply select arm64 as the target architecture, make the build, then install the resulting rootfs on your target.
However, this rootfs completely replaces the target rootfs, so it's not relevant that the target is uClibc. So my guess is that you want to install just a single executable. Doing that is made more difficult with shared libraries, because you need to copy not just the executable, but also any libraries it links with. So it may help to configure Buildroot to link statically (BR2_STATIC_LIBS).
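In Buildroot that is a single config symbol, set via make menuconfig (under Build options) or directly in .config:
BR2_STATIC_LIBS=y    # build everything statically linked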
-EDIT-
If you want to run an environment similar to the target, it's not possible to run this in Docker unless your build machine is also arm64. That's what the warning "requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64)" is saying. Instead of Docker, you need to use virtualisation, e.g. qemu.
You can bring up a qemu environment for arm64 with make qemu_aarch64_virt_defconfig. Check out board/qemu/aarch64-virt/readme.txt for how to start qemu.
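A sketch of that flow from a Buildroot checkout (the defconfig and readme path are the ones named above; the full build takes a while):
make qemu_aarch64_virt_defconfig         # select the arm64 qemu "virt" board config
make                                     # builds the cross toolchain, kernel, and rootfs
cat board/qemu/aarch64-virt/readme.txt   # shows the qemu-system-aarch64 command to boot the result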

Node and CPU architecture

I have a Node app that is going to run on a small touch-screen device that has an ARM CPU. The app itself is pretty simple. It reads data from syslog and sends an IPC message to another process if it finds a log entry with some specific data.
My concern is whether or not there will be any issues with installing the npm dependencies on a build machine that is running a different architecture and then copying them onto the ARM device. The build machine is likely to be a 64-bit Mac or Linux box.
The app seems to work fine when I run npm install on my Mac and then copy the resulting node_modules folder onto the ARM device. However, I had written Electron apps for this same ARM device that required us to use electron-packager with a target architecture of --platform=linux --arch=armv7l for it to run. Simply installing the node_modules on a Mac and then copying them over did not work in that case.
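For reference, an electron-packager invocation with those flags looks something like this (the app name here is hypothetical):
electron-packager . myapp --platform=linux --arch=armv7l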
So what is the difference? Is it just the use of Electron itself that requires the platform-specific build, or is it something else I might run into with this new app I'm writing?
You can find platform-specific files by executing:
find node_modules -name "*.node" |xargs file
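Those *.node files are compiled native addons, and they are what tie a node_modules tree to one CPU architecture; pure-JavaScript packages are portable. Electron apps additionally bundle the platform-specific Electron binary itself, which is why electron-packager needs a target platform and arch. An illustrative run (the output line is hypothetical):
find node_modules -name "*.node" | xargs file
# node_modules/bcrypt/lib/binding/bcrypt_lib.node: ELF 64-bit LSB shared object, x86-64, dynamically linked ...
If the find prints nothing, the tree is pure JavaScript and copying it across architectures is generally safe.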

Can run ARM/rpi images in Docker on Windows but not linux

I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86/64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. The error makes sense to me, but then I don't get why the image runs in Docker on Windows, and I wonder whether I'm missing an opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
Note that the linuxkit VM uses the 'F' (fix-binary) flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug to do so: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
On Windows and Mac, Docker runs inside a Linux VM. So I think that for your container, Windows started an ARM Linux VM, but native Linux used the native architecture.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 Docker image on a Raspberry Pi 2 (which uses an ARM architecture). I am pretty sure it would be the same error the other way round.
So, as Kulti said, Windows/Mac must have started an ARM Linux VM.
If you wish to work with ARM Docker images on Linux, you may want to try running a Linux Docker VM manually. I think you can do it using "docker-machine", even on Linux: see the Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured such that it can run images of other architectures through qemu user-mode emulation. You can configure native Linux in a similar way, and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part 1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part 2 builds on Part 1 to show how to configure the Linux kernel (installed on any architecture) to interpret ARM binaries using qemu user emulation.
Finally, Part 3 shows how to apply the same trick to a Linux setup inside a Docker container, which means that a Linux Docker container (for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about Docker's implementation or containerization that allows Docker on Windows to execute ARM binaries. Rather, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through qemu's user-mode emulation of an ARM CPU.
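For a concrete sense of what such a registration looks like, this is roughly the qemu-arm entry that qemu's qemu-binfmt-conf.sh script writes as root (fields are name:type:offset:magic:mask:interpreter:flags; treat the exact bytes as illustrative):
echo ':qemu-arm:M::\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:F' > /proc/sys/fs/binfmt_misc/register
# 'M' matches on the ARM ELF header's magic bytes; 'F' preloads the interpreter
# so it does not need to exist inside the container filesystem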
I know this post is old, but I will post my solution here in case someone comes here through Google.
This happens because your Docker host is not able to run images with the ARM architecture. To enable this in your Docker, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures that I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
Docker includes binfmt_misc in the embedded Linux VMs used by Docker Desktop, and there appears to be some additional functionality there that avoids the need to manually mount the static qemu files inside the container.
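On any Linux host you can confirm a handler is registered and active; with the Debian qemu-user-static package the ARM entry is named qemu-arm:
cat /proc/sys/fs/binfmt_misc/qemu-arm
# expect an "enabled" line, an interpreter line (e.g. /usr/bin/qemu-arm-static), and a flags line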

DNU publish to linux runtime from windows OS

I want to publish, using dnu from a Windows machine, an app that will run on Linux. This is required to make Docker images. I know the usual practice is to push the source into a Linux Docker container and do "dnu restore", but that sounds like a lengthy process, and it goes completely against the cross-platform compatibility that DNXCore50 is trying to offer.
The latest dnx runtime now includes a "runtime" for unix/darwin-related packages to target the other operating systems. But how do I run a publish command that targets Linux? Or rather, is there a way to pull the Linux dnx core onto a Windows machine using dnvm install coreclr?
