Which commands of the defined Linux Distribution are available in a Docker container? - linux

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as Ubuntu's default package manager. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available just as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions differ. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I assume not all the tools of a real installation are included. But how can I know which ones I can rely on being there?

Base OS images for Docker are deliberately stripped down, and for Ubuntu they remove more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of; you'll only know by building your image. But once images are tagged, you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
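If you need ping (or any other removed tool) in a newer base image, the fix is simply to install the package that provides it; on Ubuntu, ping ships in the iputils-ping package, so a sketch of a working xenial version would be:
FROM ubuntu:xenial
RUN apt-get update && apt-get install -y iputils-ping
RUN ping -c 1 127.0.0.1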

A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
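For example, rather than relying on service nginx start (which expects an init system), the usual pattern is to run the service in the foreground as the container's main process. A minimal sketch using the nginx example from the question:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]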

Related

Understanding docker: what is the dockerhub Ubuntu image?

I configured Windows Subsystem for Linux and installed a Microsoft-packaged Ubuntu on Windows 10 to get my hands on Docker using Linux. From what I understand, Docker does not need a guest OS, unlike VMware - that's one of its main advantages.
I browsed Docker Hub and found an official Ubuntu image. What is it for, if there is no need for a guest OS?
Shared OS is probably the wrong term here, because many people consider the Linux distribution and filesystem part of the OS. Containers run with a shared Linux kernel, but in namespaces isolated from the host and from each other. One of those namespaces is the mount namespace, which includes your root filesystem. Therefore, when you enter a container, the files in /bin and other directories are assembled from the image (plus volume mounts and changes made within the container).
The Ubuntu Docker image is an initial filesystem with a minimal Ubuntu environment that you can use to create other images for running your containers. If you were to start a container without it, you wouldn't have anything: no /bin/sh, no apt, no libraries, and you would first have to create every binary and needed library just to run commands inside the container.
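For contrast, an image built from an empty filesystem looks like this (just a sketch; hello stands for a statically linked binary you would have to provide yourself, since there is no shell or library in the image to fall back on):
FROM scratch
COPY hello /hello
CMD ["/hello"]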

Can run ARM/rpi images in Docker on Windows but not Linux

I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86/64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. The error makes sense to me, but then I don't get why it runs in Docker on Windows, and I wonder whether I'm missing some opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your Linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas on how to set it up.
Note that the linuxkit VM uses the 'F' flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug to do so: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
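If you want to see how a handler is registered on a given machine, the binfmt_misc entries are exposed under /proc (a quick check, assuming the handler was registered under the name qemu-arm; the name depends on how it was set up):
cat /proc/sys/fs/binfmt_misc/qemu-arm   # look for 'F' in the flags: line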
On Windows and Mac, Docker runs inside a Linux VM. So I think that for your container, Windows started an ARM Linux VM, whereas native Linux uses the native architecture.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 Docker image on a Raspberry Pi 2 (which has an ARM architecture). I am pretty sure it would be the same error the other way round.
So, as Kulti said, Windows/Mac must have started an ARM Linux VM.
If you wish to work with ARM Docker images on Linux, you may want to try running a Linux Docker VM manually. I think you can do it using "docker-machine", even on Linux: Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM that has been configured so it can run images of other architectures through Qemu user-mode emulation. You can configure native Linux in a similar way, and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part #1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part #2 builds on Part #1 to show how to configure the Linux kernel (on any architecture) to interpret ARM binaries using Qemu user emulation.
Finally, Part #3 shows how to apply the same trick to a Linux setup inside a Docker container, which means that a Linux Docker container (on any host architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about the Docker implementation or containerization that allows Docker on Windows to execute ARM binaries. Rather, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through Qemu's user-mode emulation of an ARM CPU.
I know this post is old, but I will post my solution here in case someone comes here through Google.
This happens because your Docker host is not able to run images built for the ARM architecture. To enable this on your Docker host, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (that way I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then, when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the ARM architecture:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
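A quick sanity check that the emulation actually kicks in (the exact machine string may vary by image):
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node uname -m   # should print an ARM machine type such as armv7l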
Docker includes binfmt_misc in the embedded Linux VMs used by Docker Desktop, and there appears to be some additional functionality to avoid the need to manually mount the static qemu files inside the container.

Linux dev environment in OSX (docker as VM or any other)

I'd love to hear from you some advice on setting up what I'm looking for.
I'm using OSX and I need to develop some code on a Linux machine. The thing is that I was looking for an alternative to a VM, since a VM takes too much battery power.
The first thing I came across was a Docker container. I know it is not what containers were designed for, but I thought it might work anyway. So I tried running a container as
docker run -i -t ubuntu /bin/bash
and it worked well. However, all the changes I make are gone and I can't find a way to solve that. I also tried
docker run -i -v /Users/JaimehRubiks/test:/home/Jaime -t ubuntu /bin/bash
and all files in there are saved (also very interesting because I can share my files with the host), but it's kind of boring having to commit to the Docker image if I change anything in the config files of my Ubuntu.
What I'm looking for is just a simple way to run Linux on my Mac and then access it somehow, like I did with Docker or via SSH.
Docker currently does not run natively on OSX, as Docker relies on the Linux kernel for its isolation features. In fact, the Docker Toolbox uses a VirtualBox virtual machine running the boot2docker Linux distro to run the Docker daemon on OSX. See the official documentation on Mac OSX installation.
The boot2docker Linux image is quite lightweight, but I'm not sure you will get much benefit from running Docker on OSX for Linux development over simply running a full VirtualBox machine with Ubuntu (or another distro). If you want to run a virtual machine, Vagrant is a good tool to help you set that up. It lets you easily pull down images from an image repo, set up the image, and ssh into it. It also makes host -> guest folder sharing and port forwarding quite simple.
but it's kind of boring having to commit to the docker image if I change anything in the config files of my ubuntu.
You don't have to docker commit anything: any file change made on the host (/Users/JaimehRubiks/test) will be visible in the container (/home/Jaime).
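Also note that exiting the container doesn't delete it: the filesystem changes stay in the stopped container, and you can go back to them. A rough sketch (the name devbox is just an example):
docker run -it --name devbox ubuntu /bin/bash
docker start -ai devbox   # later: restart and reattach, with your previous changes intact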
What about using Vagrant to run Ubuntu or CentOS? You can access the system via the command vagrant ssh, configure it with a configuration file, and share folders with it, much like with Docker.
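In case it helps, the Vagrant workflow from the answers above boils down to a few commands (assuming the ubuntu/xenial64 box; any box from the Vagrant catalog works):
vagrant init ubuntu/xenial64   # writes a Vagrantfile in the current directory
vagrant up                     # downloads the box and boots the VM
vagrant ssh                    # SSH into the running VM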

One compatibility issue with Docker

It is known that Docker is a virtualization technology based on the Linux kernel, and Windows images cannot be run on it. So when I run the Docker daemon on CentOS 6.5, does it matter if I start a container from a CentOS 7 image?
No, it doesn't matter very much. The Docker image provides the filesystem for your container, while your host OS provides the kernel. The only way it could wind up mattering is if the process you are running requires some kernel feature that is not present in the kernel running on your host system.
You can run Docker images based on all sorts of Linux distros without issue. Alpine Linux has become pretty popular recently, for example.
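You can see the image/kernel split directly: containers from different distro images report the same (host) kernel but their own userland. A quick sketch (image tags are just examples):
docker run --rm centos:7 uname -r            # same kernel version as the host
docker run --rm alpine uname -r              # same again
docker run --rm alpine cat /etc/os-release   # but an Alpine userland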

How to remove/install a Docker image on an unconfigured Docker for CentOS 7

I'm using CentOS 6.6 and 7 and have decided to move to CentOS 7, as there are some issues using Docker with CentOS 6.6 (reboot issues for me), and I'm trying to pull the current CentOS image from Docker (it should just be docker pull centos).
However, because I already had a CentOS Docker image installed on the 6.6 virtual machine, I thought it conflicted with the one I'm trying to pull on CentOS 7. It states that the image (f1b something) is already being used on the system, which stops the download from going through. Simply going over to CentOS 6.6 and trying to remove the images (which are labeled as none, by the way, so you have to do docker images -a), even with force, does nothing. The only solution so far is a full removal of Docker and its dependencies and reinstalling it fresh.
Of course this is not the solution I want. One of two things would help: either a way to make the two of them coexist, or a way to remove the current one without removing any other current images. Or, if I am not getting this right, an entirely different approach.
EDIT +1: OK, here's the actual error I'm receiving when doing the docker pull...
f1b...: download complete
f1b...:error downloading dependant layers
c85...:Downloading [>
7322...: Error pulling image (latest) from docker.io/centos, endpoint :https://registry-1.docker.io/v1,Dr
7322...:Error pulling image (latest) from docker.io/centos, Driver devicemapper failed to create image rootfs
FATA[0012] Error pulling image (latest) from docker.io/centos, Driver device mapper failed to create image rootfs f1b...:error running DeviceCreate (createSnapDevice) dm_task_run failed
And looking over the problem more, I'm not so sure it's because of CentOS 6.6 like I had initially thought, despite the images sharing the same IDs.
EDIT +2: Stranger still is that the fatal error codes keep changing (I'm assuming those are the FATA[0012] codes?)
http://docker-sean.readthedocs.org/en/latest/chapter1.html
There's a config file that needs to be changed for CentOS 7 Docker users, which amounts to applying the following change
OPTIONS='-g /docker/data -p /var/run/docker.pid'
in /etc/sysconfig/docker (edit it with vim or vi).
I swear docker is going to be the death of me...
EDIT +1: OK, let's recap the solution as the following, starting from a new CentOS 7 machine...
yum install docker
service docker start
docker pull centos
ERROR
systemctl enable docker.service
ERROR?
sudo systemctl enable docker.service
systemctl start docker.service
ERROR?
yum remove docker
yum install epel-release.noarch
yum install docker-io
vim /etc/sysconfig/docker
OPTIONS='-g /docker/data -p /var/run/docker.pid'
service docker restart
docker pull centos
and that's how I got Docker to work on the new VM, if I've recapped it correctly.
Also, one of the commands I might have used was thin_check. Somebody used it to verify Docker in this link.
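If you hit devicemapper errors like the ones above, it can also help to check which storage driver and pool files the daemon is actually using (just a quick look; the exact fields vary by Docker version):
docker info   # check the 'Storage Driver' section (devicemapper pool, data/metadata files)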
EDIT +2:
Oh wow, this would explain even better what's happening here. See, the Docker server can be installed straight out of the box with CentOS 7; however, the daemon must still be installed from EPEL. As a reminder, the daemon is the part that actually runs the Docker service; the server just allows Docker to connect to the internet and view its repositories. The link is right here.

Resources