I'm making a hotfix for AROC on the Chromebook Plus V2 (which has an x86_64 architecture, but no multiarch support) and I want to add a test to the script that checks for it. What command can I use to check for multiarch on a Linux x86_64 system?
(Just to reference the original issue) when deploying AROC on that Chromebook, the device could not run the i686 busybox binary that the script installs.
The author insists on the i686 binary because the Android containers he tests deployments on are 32-bit and run on a host system with multiarch.
My goal is to fix his script and add support for the device I was testing on.
I plan to do this by checking for multiarch and installing the i686 binary if a 32-bit runtime exists, or the x86_64 binary if it doesn't. What command can I use to check for multiarch?
You're asking about multilib support (whether the 32-bit runtime libraries are present), not multiarch (Debian's scheme for installing packages of several architectures side by side).
You can simply check whether the 32-bit dynamic loader exists: test -e /lib/ld-linux.so.2
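A minimal sketch of how that check might slot into the install logic (the BUSYBOX_ARCH variable and the echo are placeholders, not taken from the actual AROC script):
# Pick a busybox binary based on whether a 32-bit runtime is present.
# /lib/ld-linux.so.2 is the 32-bit dynamic loader; if it exists, the host
# can run i686 binaries, otherwise fall back to the x86_64 build.
if [ -e /lib/ld-linux.so.2 ]; then
    BUSYBOX_ARCH=i686
else
    BUSYBOX_ARCH=x86_64
fi
echo "Installing busybox for ${BUSYBOX_ARCH}"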
My main computer is running Ubuntu 18.04. I developed an application with a ReactJS front end, a NodeJS back end, and a MySQL database, deployed on a BeagleBone.
More information about my BeagleBone :
root@beaglebone:~# uname -a
Linux beaglebone 3.8.13-bone71.1 #162 SMP Fri Oct 16 07:27:34 CST 2015 armv7l GNU/Linux
I want to run my application always at startup on BeagleBone
What can I do to make a script run as soon as it boots up ?
Short answer: just like on any other device (including a PC or server) that runs a Linux distribution.
Some quick pointers:
Latest BeagleBone Images are Debian 9.4 based
Use an "IoT" image unless you really need the HDMI (or LCD) output and accept the lower performance.
Debian uses systemd to manage automatic starting and stopping of software services
Create a systemd service file that invokes the process you need (e.g. npm) as the desired user (probably 'debian'); a sketch of such a unit file follows this list. There are also helper tools like service-systemd.
Reload systemd (systemctl daemon-reload) to make it aware of the new file.
Enable it: systemctl enable myfancy.service
Both flavours of MySQL on Debian (mysql-server and mariadb-server) already come with a systemd unit file.
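A minimal sketch of such a unit file, saved e.g. as /etc/systemd/system/myfancy.service (the service name, working directory, and the mysql.service dependency are assumptions; adjust them to your project, and use mariadb.service if that is the flavour you installed):
[Unit]
Description=My Node.js application
# Start only after networking and the database are up
After=network.target mysql.service

[Service]
User=debian
WorkingDirectory=/home/debian/myapp
ExecStart=/usr/bin/npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target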
I'm wondering if it's possible to use a very old Linux distribution like Debian GNU/Linux 3.1 (Sarge) and create a base image from it to run legacy code that doesn't work under newer distros.
The only thing I found about it was somebody successfully using Ubuntu Feisty: Run old Linux release in a Docker container?
Are there any known limitations?
Your host needs to have a minimum version of the Linux kernel, and that version is 3.10.
See Docker minimum kernel version 3.8.13 or 3.10.
An extract from that link:
There's also a shell script to check if your system has the required dependencies in place and to check which features are available:
https://github.com/docker/docker/blob/master/contrib/check-config.sh
So you can use this to check if you will be able to use docker on this host.
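For example (the raw URL is an assumption based on the repository path cited above; the docker/docker repo is now moby/moby, so the script may have moved):
# Download and run Docker's kernel-config checker on the prospective host
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
chmod +x check-config.sh
./check-config.sh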
From https://wiki.debian.org/DebianSarge?action=show&redirect=Sarge I see that Sarge shipped with kernel 2.4.27 and 2.6.8, so its userspace expects kernels far older than 3.10, and it may not work.
I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on every Linux x86_64 machine I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. The error itself makes sense to me, but I don't get why the image runs in Docker on Windows, and I wonder whether I'm missing an opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
Note that the linuxkit VM uses the 'F' flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug requesting it: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
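For illustration, a registration with the 'F' flag looks roughly like this; the magic/mask bytes are the commonly used qemu-arm pattern and the interpreter path assumes qemu-user-static is installed, so treat it as a sketch rather than the exact linuxkit configuration:
# Run as root, with binfmt_misc mounted at /proc/sys/fs/binfmt_misc.
# Registers qemu-arm-static as the interpreter for 32-bit ARM ELF binaries;
# the trailing 'F' makes the kernel open the interpreter at registration time,
# so it does not have to exist inside the container's filesystem.
echo ':qemu-arm:M::\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:F' > /proc/sys/fs/binfmt_misc/register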
On Windows and Mac, Docker runs inside a Linux VM. So I think that for your container, Windows started a Linux VM able to run ARM binaries, while under native Linux only the native architecture is used.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 Docker image on a Raspberry Pi 2 (which uses an ARM architecture). I am pretty sure it is the same error when you do it the other way round.
So, as Kulti said, Windows/Mac must have started an ARM-capable Linux VM.
If you wish to work with ARM Docker images on Linux, you may want to try running a Linux Docker VM manually. I think you can do it using "docker-machine" even on Linux: Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured such that it can run images of other architectures through qemu user-mode emulation. You can configure native Linux in a similar way, and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part#1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part#2 builds on Part#1 to show how to configure the Linux kernel (on any architecture) to interpret ARM binaries using qemu user emulation.
Finally, Part#3 shows how to apply the same trick to a Linux setup inside a Docker container, which means that the container (which could be for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about Docker's implementation or containerization that allows Docker on Windows to execute ARM binaries. Instead, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through qemu's user-mode emulation of an ARM CPU.
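As a quick way to check whether such handlers are registered on a given host (these paths only exist once binfmt_misc is mounted and at least one handler has been registered):
# List registered binfmt handlers and inspect the ARM one, if present
ls /proc/sys/fs/binfmt_misc/
cat /proc/sys/fs/binfmt_misc/qemu-arm   # shows the interpreter path, flags, and magic/mask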
I know this post is old, but I will post my solution here in case someone comes here through Google.
This happens because your Docker host is not able to run images with an ARM architecture. To enable this in your Docker, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures that I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
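As a quick sanity check, uname inside such a container should report an ARM architecture even though the host is x86_64 (image name as in the question):
docker run --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node uname -m
# expected output: something like armv7l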
Docker includes binfmt_misc in the embedded Linux VMs used by Docker Desktop, and there appears to be some additional functionality to avoid the need to manually mount the static qemu files inside the container.
It is known that Docker is a virtualization technology based on the Linux kernel, and that Windows images cannot be run on it. So when I run the Docker daemon on CentOS 6.5, does it matter if I start a container from a CentOS 7 image?
No, it doesn't matter very much. The Docker image provides the filesystem for your container, while your host OS provides the kernel. The only way it could wind up mattering is if the process you are running requires some kernel feature that is not present in the kernel running on your host system.
You can run Docker images based on all sorts of Linux distros without issue. Alpine Linux has become pretty popular recently, for example.
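To see this in practice, you can compare what the image reports as its distribution with the kernel version reported inside the container, which is always the host's (the image names below are just common public ones used for illustration):
# The image supplies the userland (CentOS 7, Alpine, ...)
docker run --rm centos:7 cat /etc/redhat-release
docker run --rm alpine cat /etc/os-release
# ...but uname inside any container reports the host's kernel
docker run --rm alpine uname -r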