Do containers depend on a specific host? - linux

I have an application that includes kernel-space code, compiled against Linux kernel 2.6.32-431.el6.x86_64 (CentOS 6.5). It is then installed as a kernel module to run the application. When I tried to containerize my application with Docker (installed on an Amazon Linux AWS EC2 instance), it complained that the host's newer kernel version is incompatible with the module. Does this mean I have to install Docker on a host whose kernel version is 2.6.32-431.el6.x86_64? If so, do our containers depend on a specific host machine?

Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system.
https://www.docker.com/what-docker
Docker containers use the host's kernel. You could install on the host the same kernel version that the container requires, but you will not be able to run a CentOS 6-compiled kernel module on Amazon Linux.
Considering your specific application requirements, I would suggest using a "real" virtualization solution such as Xen or KVM, which allows you to use your own kernel in a VM.

Related

Understanding docker: what is the dockerhub Ubuntu image?

I configured Windows Subsystem for Linux and installed a Microsoft-packaged Ubuntu on Windows 10, to get my hands on Docker using Linux. From what I understood, Docker does not need a guest OS, unlike VMware - that's one of its main advantages.
I browsed Docker Hub and found an official Ubuntu image. What is it for, given that there is no need for a guest OS?
Shared OS is probably the wrong term here, because many include the Linux distribution and filesystem as part of the OS. Containers run with a shared Linux kernel, but in isolated namespaces from the host and each other. One of those namespaces is the mount namespace, including your root filesystem. Therefore when you enter a container, the files in /bin and other directories are assembled from the image (plus volume mounts, and changes made within the container).
The Ubuntu docker image is an initial filesystem containing a minimal Ubuntu environment that you can use to create other images for running your containers. If you were to start a container without it, you wouldn't have anything: no /bin/sh, no apt, no libraries, and you would first have to provide every binary and library needed to run commands inside the container.
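As a rough illustration (the tag ubuntu:20.04 and the image name my-ubuntu-curl below are just examples), the Ubuntu image is typically used as the FROM line of a Dockerfile, which layers your own changes on top of that minimal filesystem:
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends curl
CMD ["bash"]
EOF
docker build -t my-ubuntu-curl .
docker run --rm -it my-ubuntu-curl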

Are there any limitations regarding the age of a linux distribution which can be used to create a docker base-image?

I'm wondering whether it's possible to use a very old Linux distribution like Debian GNU/Linux 3.1 (Sarge) and create a base image from it, in order to run legacy code that doesn't work under "younger" distros.
The only thing I found about it was somebody successfully using Ubuntu Feisty: Run old Linux release in a Docker container?
Are there any known limitations?
Your host needs to have a minimum version of the Linux kernel, and that version is 3.10.
See
Docker minimum kernel version 3.8.13 or 3.10
An extract from the previous link:
There's also a shell script to check if your system has the required dependencies in place and to check which features are available:
https://github.com/docker/docker/blob/master/contrib/check-config.sh
So you can use this to check if you will be able to use docker on this host.
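For instance, fetching and running the script on the host might look like this (the raw URL is inferred from the repository link above, so adjust it if the file has moved):
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh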
From
https://wiki.debian.org/DebianSarge?action=show&redirect=Sarge
I see
kernel: Linux 2.4.27 and 2.6.8
So it may not work.

Is there a way to share host (ubuntu) file system with guest (centos 7)?

I am trying to use virsh and a domain XML to launch a CentOS 7 guest from an Ubuntu 16.04 LTS host.
The "filesystem" node that I am using in the domain XML is as below:
<filesystem type='mount' accessmode='passthrough'>
  <driver type='path' wrpolicy='immediate'/>
  <source dir='/opt/test'/>
  <target dir='testlabel'/>
</filesystem>
With the above config, "testlabel" is not visible in the guest and hence I am not able to mount it. Is there anything that I am missing?
I tried to get the 9p modules into the guest, but they don't seem to be available in CentOS 7.
I do not want to use network-based file sharing like NFS or GlusterFS either.
RHEL-7 (and thus CentOS-7) explicitly does not support the 9p filesystem. It is disabled in guest kernel builds and also disabled in QEMU builds for RHEL hosts. The reason is that 9p support in QEMU has been largely unmaintained upstream and the QEMU community doesn't have confidence in its security or performance.
If you want to share filesystem locations, pretty much your only choice is to use a traditional network filesystem, whether NFS, SAMBA, or something tunnelled like SSHFS.
Work is ongoing upstream to support a new technology called virtio-vsock, which will allow running NFS-over-vsock, bypassing the need for networking - think of it as akin to NFS over UNIX sockets. This is not ready for use yet though, so not possible for an Ubuntu/RHEL-7 pair.
For the guest (CentOS 7), use the kernel from the CentOSPlus repository (Wiki: CentOSPlus). The CentOSPlus kernel has 9p file system support built in. You can install the "kernel-plus" kernel with
yum --enablerepo=centosplus install kernel-plus
Start the guest with the "kernel-plus" kernel and
mount -t 9p -o trans=virtio {sharetarget} {mountpoint}
works. I use it this way on CentOS 7 guest systems.
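A rough sketch of the guest-side steps, assuming the target tag "testlabel" from the question's XML and a mount point of /mnt/test (both just examples):
modprobe 9p                 # 9p client module, provided by the kernel-plus build
modprobe 9pnet_virtio       # virtio transport for 9p
mkdir -p /mnt/test
mount -t 9p -o trans=virtio testlabel /mnt/test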

Can run ARM/rpi images in Docker on Windows but not linux

I'm able to run the ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86/64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. That error makes sense to me, but I don't get why the image runs in Docker on Windows then, and I wonder whether I'm missing an opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
Note that the linuxkit VM uses the 'F' flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug to do so: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
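If you want to check what your own host has registered (the paths below follow the usual binfmt_misc layout; the qemu-arm entry only exists once a handler has been registered):
ls /proc/sys/fs/binfmt_misc/            # lists registered interpreters, if any
cat /proc/sys/fs/binfmt_misc/qemu-arm   # shows the interpreter path and flags (look for F)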
On Windows and Mac, Docker runs under a Linux VM. So I think that for your container on Windows, an ARM Linux VM was started, whereas under native Linux the native architecture is used.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 docker image on a Raspberry Pi 2 (which uses an ARM architecture). I am pretty sure it is the same error when you do it the other way round.
So, as Kulti said, Windows/Mac must have started an ARM Linux VM.
If you wish to work with ARM docker images on Linux, you may want to try running a Linux docker VM manually. I think you can do it using "docker-machine", even on Linux: Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured so that it can run images of other architectures through QEMU user-mode emulation. You can configure native Linux in a similar way, and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part 1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part 2 builds on Part 1 to show how to configure the Linux kernel (installed on any architecture) to interpret ARM binaries using QEMU user emulation.
Finally, Part 3 shows how to apply the same trick to a Linux setup inside a docker container, which means that a Linux docker container (which could be for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about the docker implementation or containerization that allows docker on Windows to execute ARM binaries. Instead, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through QEMU's user-mode emulation of an ARM CPU.
I know this post is old but I will post my solution here in case someone came here through Google.
This happens because your Docker host is not able to run images with the ARM architecture. To enable this in your Docker, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures that I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
Docker includes binfmt_misc in the embedded Linux VMs used by Docker for Desktop, and there appears to be some additional functionality that avoids the need to manually mount the static qemu files inside the container.
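As a quick sanity check of such a setup (a sketch only, not tested against this exact image), the machine type reported inside the container should be an ARM identifier such as armv7l rather than x86_64:
docker run --rm \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
  hypriot/rpi-node uname -m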

one compatibility issue about docker

It is known that Docker is a virtualization technology based on the Linux kernel, and Windows images cannot be run on Docker. So when I run the Docker daemon on CentOS 6.5, does it matter if I start a container from a CentOS 7 image?
No, it doesn't matter very much. The docker image provides the filesystem for your container, while your host os provides the kernel. The only way it could wind up mattering is if the process you are running requires some kernel feature that is not present in the kernel being run on your host system.
You can run docker images based off of all sorts of linux distros without issue. Alpine linux has become pretty popular recently, for example.
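As a small illustration of that split (assuming a centos:7 image is available to the daemon), the userland comes from the image while the kernel comes from the host, so these two commands report different things:
docker run --rm centos:7 cat /etc/redhat-release   # CentOS 7, taken from the image's filesystem
docker run --rm centos:7 uname -r                  # the host's 2.6.32 kernel, not a CentOS 7 kernel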
