I release ARM binaries of my software, by running the compiler toolchain on an emulated ARM machine.
Linux can run foreign-architecture binaries by registering qemu-user-static as a binfmt_misc interpreter under /proc/sys/fs/binfmt_misc/. This allows you to run an ARM32 or ARM64 Docker image on an x86_64 Docker host, as follows:
Preparation:
# Apply `binfmt_misc` changes on host OS
docker run --rm --privileged multiarch/qemu-user-static:register --reset
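To confirm the registration took effect on the host, you can inspect the corresponding binfmt_misc entry. This is a hedged sketch: the entry name qemu-aarch64 comes from the multiarch register image and may differ in your setup.

```shell
# Illustrative check: does the host have an aarch64 handler registered?
entry=/proc/sys/fs/binfmt_misc/qemu-aarch64
if [ -e "$entry" ]; then
  head -n 1 "$entry"            # "enabled" means the kernel will use it
  grep '^interpreter' "$entry"  # path to the qemu-user-static binary
else
  echo "qemu-aarch64 handler not registered"
fi
```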
Dockerfile:
# Get x86_64 qemu-user-static binaries
FROM debian:buster
RUN apt-get update && apt-get install -qqy qemu-user-static
# Get cross-arch rootfs
FROM arm64v8/golang:latest
COPY --from=0 /usr/bin/qemu-aarch64-static /usr/bin/qemu-aarch64-static
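As a sanity check (an illustrative addition of mine, not part of the original Dockerfile), a RUN step appended to the second stage already executes through the registered emulator at build time:

```dockerfile
# Illustrative: this runs under qemu-aarch64-static during the build,
# so it should report the emulated architecture rather than x86_64.
RUN uname -m   # expected: aarch64
```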
This works great on Docker for Linux.
It also works great on Docker for Windows when using Linux Containers (MobyLinuxVM).
It doesn't work on Docker for Windows when using Windows Containers (LCOW / hcsdiag mode). I want to use this mode because it can run both Linux and Windows containers, but it's not possible to modify the binfmt_misc file via the --privileged flag:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Windows does not support privileged mode.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
Current (1803-era) versions of Hyper-V HCS run a real Linux kernel, not a WSL one, so I assume it should be possible to modify the host's binfmt_misc configuration.
How is it possible to run a Linux/ARM container image on a Windows/x86_64 Docker host running LCOW?
Is it possible to modify the Linux host image used by LCOW?
Is there any other way to get a unified docker daemon that is capable of running Windows/x86_64, Linux/x86_64 and Linux/ARM Docker images?
I'm running
Ubuntu 20.04.5 LTS
Docker version 23.0.1, build a5ee5b1
Running the command
docker build -t some:other Dockerfile
Produces the following output:
unknown shorthand flag: 't' in -t
And
docker build
Produces the following:
docker: 'buildx' is not a docker command.
I installed Docker as recommended from the repo: instructions
Other plugins don't work either (docker compose is not recognized, for example). Even so, docker info shows
buildx: Docker Buildx (Docker Inc.)
Version: v0.10.2
Path: /home/jpartanen/.docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.16.0
Path: /home/jpartanen/.docker/cli-plugins/docker-compose
scan: Docker Scan (Docker Inc.)
Docker runs without sudo with the help of the docker user group, as explained in linux-postinstall. I want to run plugins without sudo as well.
I've reinstalled Docker and rebooted the machine without any change.
What could be the problem?
Make the plugins runnable for docker by creating a link:
ln -s /usr/libexec/docker/cli-plugins/ ~/.docker/cli-plugins
The command not being recognized by Docker is extra confusing because of the mismatch in commands, build vs buildx. This is because Docker Engine 23.0 set Buildx and BuildKit as the default builder on Linux. docker build is aliased to docker buildx build.
As for running without sudo, the problem is possibly caused by the plugins being installed in the wrong place. On my machine, running the command
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
installs the plugins in /usr/libexec/docker/cli-plugins/, whereas, as laid out here, the plugins are usable from $HOME/.docker/cli-plugins (without sudo).
A somewhat robust solution is to create a link as laid out above.
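The effect of that link can be sketched end-to-end with throwaway directories. The paths below are illustrative stand-ins for /usr/libexec/docker/cli-plugins and ~/.docker:

```shell
# Stand-in for the system-wide plugin dir that apt populates
sysdir=$(mktemp -d)
touch "$sysdir/docker-buildx" "$sysdir/docker-compose"

# Stand-in for the per-user dir the docker CLI searches for plugins
fakehome=$(mktemp -d)
mkdir -p "$fakehome/.docker"
ln -s "$sysdir" "$fakehome/.docker/cli-plugins"

# Plugins are now reachable through the per-user path
ls "$fakehome/.docker/cli-plugins/"
```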
I am trying to install Linux headers for my Ubuntu 18.04 Docker image (ubuntu:18.04). Usually I would run sudo apt-get install linux-headers-$(uname -r) in my VM to get the current Linux header packages.
But the Docker image returns the following when I run uname -r:
root@0c4e24cca819:/# uname -r
4.19.76-linuxkit
Just wondering which Linux header package I should use for the ubuntu:18.04 Docker image?
Docker by definition runs your current kernel. If you are on a machine whose kernel has not been packaged for Ubuntu then there is no package you can install to get its headers.
It looks like you're on a Mac, so that is definitely the case here. Perhaps you could ask the Docker for Mac maintainers to provide headers for some popular platforms for their kernel, but I suspect they don't want to take on that responsibility.
As a workaround, maybe run Docker inside Linux on, e.g., VirtualBox.
This question already has answers here:
How is Docker different from a virtual machine?
(23 answers)
Closed 5 years ago.
I recently started to learn Docker, and I know it can create and run an Ubuntu container with just a simple command:
docker run -i -t ubuntu:14.04 /bin/bash
I also know that docker-machine uses VirtualBox to create a Linux VM in a very handy way.
So what's the difference between them?
So docker run -i -t ubuntu:14.04 /bin/bash uses the Docker engine to create containers (an Ubuntu container in this case) and manages them on your host OS. docker-machine, on the other hand, uses VirtualBox to create Linux VMs that serve as Docker hosts, each running the Docker engine. There are a few links you can refer to:
https://dougwells.gitbooks.io/docker-notes/content/what_is_docker/what_is_difference_between_docker-machine_and_dock.html
https://docs.docker.com/machine/overview/
https://docs.docker.com/engine/
The first command, docker run, starts a new container. Docker containers can run anywhere: on your local machine, within a VM (VirtualBox, VMware, etc.), in a cloud instance, on bare metal, or even on your smartphone. All this requires is having Docker installed and running as a daemon/service.
docker-machine is a tool used to mimic running Docker containers locally using a VM. It exists mainly because earlier versions of Docker were not available natively on macOS and Windows, so a Linux OS with Docker installed was made available inside a virtual machine. On that VM it was possible to run Docker commands and containers as though Docker were running natively.
You should check out Docker for Mac and Docker for Windows if these are compatible with your setup.
To develop a driver program, we need the /lib/modules/<version>/build directory. But I found that under the CentOS Docker image, even after I run
yum install kernel-devel
There is still no such directory with all its contents. Questions:
(1) how to make it possible to develop driver in a docker linux environment?
(2) is it possible to load this developed module?
Docker is not a virtual machine.
Ubuntu in Docker is not a real Ubuntu installation.
If you want to develop for Ubuntu, you should use VirtualBox or VMware.
Check this link for more information
Docker uses the kernel of the host machine.
After reading this page, I almost gave up building a kernel module in Docker so I'm adding this answer hoping it helps somebody. See also what-is-the-difference-between-kernel-drivers-and-kernel-modules
You can build kernel modules in Docker as long as the kernel source required for the build is available inside the container. Let's say you want to build against the latest kernel source available in your yum repos; you could install it with yum install kernel-devel. The source will be in the /usr/src/kernels/<version> directory. You could also install a specific version of kernel-devel from your repo if that is what you want.
Then build the module using make -C <path_to_kernel_src> M=$PWD, where the path to the kernel source would be /usr/src/kernels/<version>.
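A minimal end-to-end sketch of the steps above: the module source and Kbuild Makefile here are a standard hello-world skeleton (my illustration, not from the original answer), and the make step only succeeds where kernel-devel has put source under /usr/src/kernels/.

```shell
mkdir -p hello_mod && cd hello_mod

# A standard hello-world module (illustrative)
cat > hello.c <<'EOF'
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void) { pr_info("hello\n"); return 0; }
static void __exit hello_exit(void) { pr_info("bye\n"); }

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
EOF

# Kbuild needs only the object list
cat > Makefile <<'EOF'
obj-m := hello.o
EOF

# Build against the newest installed kernel source, if any
KSRC=$(ls -d /usr/src/kernels/* 2>/dev/null | tail -n 1)
if [ -n "$KSRC" ]; then
  make -C "$KSRC" M="$PWD" modules   # produces hello.ko on success
else
  echo "no kernel source under /usr/src/kernels; yum install kernel-devel first"
fi
```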
Read - Kernel Build System » Building External Modules
A Docker container uses the kernel of the host machine, so if you want to build against the running kernel, i.e., the kernel of the Docker host machine, you could try running the container in privileged mode and mounting the modules directory: docker run --name container_name --privileged --cap-add=ALL -v /dev:/dev -v /lib/modules:/lib/modules image_id. See this.
You should not load a module on a kernel that is not the same as the one the module was built for. You could force-load it, but that is highly discouraged. Remember that your running kernel, i.e., the Docker host kernel, is the kernel of the Docker container, irrespective of which kernel-devel version you installed.
To see the kernel the module was built for (or built using), run modinfo <module> and look for vermagic value.
Dynamic Kernel Module Support is also worth a read.
I'm trying to expose an Arduino that's plugged into my mac to a linux instance I'm running in Docker for Mac (no vm).
The Arduino exposes itself as /dev/tty.usbserialXXX. I'm using the node docker image which is based upon ubuntu.
The command I'm running is
$ docker run --rm -it -v `pwd`:/app --device /dev/tty.usbmodem1421 node bash
docker: Error response from daemon: linux runtime spec devices: error gathering device information while adding custom device "/dev/tty.usbmodem1421": lstat /dev/tty.usbmodem1421: no such file or directory.
If I try to use --privileged
$ docker run --rm -it -v `pwd`:/app --device /dev/tty.usbmodem1421 --privileged node bash
root@8f18fdbcf64d:/# ls /dev/tty.*
ls: cannot access /dev/tty.*: No such file or directory
Nothing is exposed!
I'm using this to expose serial devices to test serial drivers in linux.
The problem here is largely that you're not running Docker on your mac. You're running a Linux VM on your Mac, inside which you're running Docker. This means that it's easy to expose the /dev tree inside the Linux VM to Docker, but less easy to expose devices from your Mac, absent some kind of support from the hypervisor.
Using the legacy "Docker Toolbox" for Mac, which is built around VirtualBox, it ought to be possible to assign a USB device to the VirtualBox host running Docker (which would in turn allow you to expose it to your Docker containers).
This GitHub issue talks about this particular situation and has links to helpful documentation.
I don't know if this sort of feature is currently available with the hypervisor used in the newer "Docker for Mac" package.
The Arduino device that is listed at /dev/tty.usbserialXXX could be a symlink to the device, and not the actual path. To resolve the symlink, try using
docker run --rm -it -v `pwd`:/app --device=/dev/$(readlink /dev/tty.usbmodem1421) node bash
There was an issue open for this some time back. Do check if it solves your problem
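The readlink substitution can be illustrated with throwaway files. The names mimic the macOS cu./tty. serial pair; on a real Mac the nodes live under /dev, and tty.* may be a real device node rather than a symlink, in which case readlink prints nothing.

```shell
d=$(mktemp -d)
touch "$d/cu.usbmodem1421"                   # stands in for the real device node
ln -s cu.usbmodem1421 "$d/tty.usbmodem1421"  # stands in for a symlinked name

# readlink yields the target that --device would need
readlink "$d/tty.usbmodem1421"               # prints: cu.usbmodem1421
```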