When I first heard that Microsoft was working to run Docker containers, it didn't make sense to me.
For a while it seemed that Docker was Linux-centric, with its dependency on Linux Containers (LXC).
Now it seems Docker has switched from LXC to an implementation of the Open Containers Format (OCF) spec in runc.
My question is: does the OCF spec mean that Docker is no longer Linux-centric? (i.e. is that how this will work? Does that mean there is the theoretical capability to do this on OS X as well?)
There are a few points of interest here.
1) Containers can only be supported natively on platforms that have support for OS virtualization. OS X (so far) does not have such a capability, so it cannot support containers natively. You have to use a VM.
2) A standardized container format does not mean that the same container will be able to run on different platforms. The container and the host necessarily have to run on the same kernel, so a particular container can only run on a compatible platform (a short sketch follows this list).
3) What the standardized container format specification does is enable richer container ecosystem technology from varied sources, all able to interwork because of the standard container format. This technology still has to be implemented for each different host platform.
4) Docker's adoption of OCF does not necessarily mean that it will automatically start targeting platforms other than Linux. It just means that the container format it uses on Linux will be OCF instead of its own proprietary format.
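To make point 2 concrete, here is a minimal sketch in Go (in no way Docker's or runc's actual implementation) that reads the os and architecture fields found in an OCI image configuration and compares them with the host. The config.json file name and the idea of running this check by hand are assumptions for illustration only.

    // platformcheck.go: a sketch, not Docker's implementation. It reads the
    // "os" and "architecture" fields of an OCI image configuration and compares
    // them with the host, illustrating why a container needs a compatible platform.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "runtime"
    )

    // ociConfig keeps only the two fields that matter here; real configs carry more.
    type ociConfig struct {
        OS           string `json:"os"`
        Architecture string `json:"architecture"`
    }

    func main() {
        // config.json is assumed to have been extracted from an image into the current directory.
        raw, err := os.ReadFile("config.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot read config:", err)
            os.Exit(1)
        }

        var cfg ociConfig
        if err := json.Unmarshal(raw, &cfg); err != nil {
            fmt.Fprintln(os.Stderr, "cannot parse config:", err)
            os.Exit(1)
        }

        fmt.Printf("container targets %s/%s, host is %s/%s\n",
            cfg.OS, cfg.Architecture, runtime.GOOS, runtime.GOARCH)
        if cfg.OS != runtime.GOOS || cfg.Architecture != runtime.GOARCH {
            fmt.Println("mismatch: this container cannot run natively on this host")
        }
    }

A container whose declared platform does not match the host kernel and CPU architecture cannot run natively there, which is exactly the constraint point 2 describes.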
+1 to Ziffusion. You might want to reword Item 1), but basically you are correct on all four points.
To answer the OP's question: I do not believe OCF "deprecates" Linux. On the contrary, I believe it better supports Linux AND, AT THE SAME TIME, opens Docker functionality to better support other OSes, too.
Specifically:
https://www.opencontainers.org/faq
In the past two years, there has been rapid growth in both interest in and usage of container-based solutions. Almost all major IT vendors and cloud providers have announced container-based solutions, and there has been a proliferation of start-ups founded in this area as well. While the proliferation of ideas in this space is welcome, the promise of containers as a source of application portability requires the establishment of certain standards around format and runtime. While the rapid growth of the Docker project has served to make the Docker image format a de facto standard for many purposes, there is widespread interest in a single, open container specification, which is:
a) not bound to higher level constructs such as a particular client or orchestration stack,
b) not tightly associated with any particular commercial vendor or project, and
c) portable across a wide variety of operating systems, hardware, CPU architectures, public clouds, etc.
The FAQ further states:
What are the values guiding the specification?
Composable. All tools for downloading, installing, and running containers should be well integrated, but independent and composable. Container formats and runtime should not be bound to clients, to higher level frameworks, etc.
Portable. The runtime standard should be usable across different hardware, operating systems, and cloud environments.
Open. The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently. ...
The Open Container Initiative is working towards a container format and a runtime that can run on many platforms, although a lot of the concepts and requirements are based on the Linux foundations they were built from. An OCF container still specifies a platform, so don't expect to be able to execute a Windows container on a Linux host. But do expect to be able to manage Linux, Windows and "Y" containers in the same manner and within the same ecosystem.
Docker moved away from LXC a while ago to libcontainer, which is still Linux-centric. runC is the next runtime: it is already able to run current Docker containers on Linux, but aims to support the Open Container Format spec on many platforms.
The goal of runC is to make standard containers available everywhere
Linux, obviously, has been building up the OS features to support containers over the last 10 years. Microsoft has included a lot of the necessary OS components in Windows 10 to run containers natively and has thrown its support behind Docker. So expect runC to be running on Windows soon.
BSD supports a lot of this functionality via its jails setup, but it never matured as much as the Linux space, so I believe additional OS support will be required before it, or OS X, can run an OCF container natively. That said, recent FreeBSD 11 does allow you to run Docker via its 64-bit Linux compatibility layer, so I'm guessing runC would be close to doing the same, at some possible performance cost.
I want to learn about LXC and came across this site: https://linuxcontainers.org/lxc/introduction/; it talks about LXC, LXD, and others.
I am a bit confused: I was under the impression that LXC is a Linux kernel feature, so it should be present in the kernel itself. However, looking at the site above (https://linuxcontainers.org/lxc/introduction/), is that the same thing as LXC the kernel feature? Or is LXC something provided on top of the Linux kernel by the linuxcontainers.org project?
How can I understand this subtle difference?
Most of the core features needed to operate Linux in containers are built into the kernel -- namespaces, control groups, virtual roots, etc. However, assembling a usable container platform from these features requires a considerable amount of infrastructure. We need to manage container storage, create network links between containers, control per-container resource usage, etc. User-space programs can be, and are, used to provide this infrastructure, and the tooling that goes with it.
I have written a series of articles on building a container from scratch that explains some of these issues:
http://kevinboone.me/containerfromscratch.html
It's possible in principle to build and connect containers using nothing but the features built into the kernel, and a bunch of shell scripts. Tools like LXC, Docker, and Podman all use the same kernel features (so far as I know), but they manipulate these features in different ways.
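To give a feel for what those user-space tools actually do with the kernel features, here is a hedged sketch (Go, Linux only, must run as root) that puts a command into its own UTS, PID and mount namespaces and chroots it into a prepared root file system. The /tmp/rootfs path and the hostname are illustrative assumptions; LXC, Docker and Podman do far more than this, but they build on the same primitives.

    // minicontainer.go: a bare-bones sketch of a "container" built only from
    // kernel features (namespaces, chroot, /proc mount). Linux only, run as root.
    // Error returns from the syscalls are ignored for brevity.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        if len(os.Args) < 3 {
            fmt.Println("usage: minicontainer run <cmd> [args...]")
            os.Exit(1)
        }
        switch os.Args[1] {
        case "run":
            parent()
        case "child":
            child()
        default:
            fmt.Println("unknown command:", os.Args[1])
            os.Exit(1)
        }
    }

    // parent re-executes this program inside new UTS, PID and mount namespaces.
    func parent() {
        cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "error:", err)
            os.Exit(1)
        }
    }

    // child now lives in the new namespaces: set a hostname, switch to an assumed
    // root file system at /tmp/rootfs and mount /proc so that tools like ps only
    // see the container's own processes.
    func child() {
        syscall.Sethostname([]byte("minicontainer"))
        syscall.Chroot("/tmp/rootfs")
        os.Chdir("/")
        syscall.Mount("proc", "proc", "proc", 0, "")
        defer syscall.Unmount("proc", 0)

        cmd := exec.Command(os.Args[2], os.Args[3:]...)
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "error:", err)
            os.Exit(1)
        }
    }

Compared to this, the real tools add image management, networking, cgroup-based resource limits, user namespaces and a great deal of safety; the point is only that the isolation itself comes from the kernel.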
On Azure there are Virtual Machines pre-configured for Data Science activities (the Data Science Virtual Machine, DSVM). There are images for Windows and for Linux - CentOS and Ubuntu. My question is: are there any important differences between the CentOS image and the Ubuntu image? Apart from the OS itself, of course ;)
From what I can see in the specifications they are mostly the same, but maybe there are some bits and pieces that have an important impact on choosing one of them.
The theory behind the DSVM is that there is a build with all of the tools and drivers you need (to run on Azure Nvidia instances) that will work regardless of OS. So the difference is purely an OS one, simply because some organisations have infrastructure geared towards Ubuntu, others towards Red Hat/CentOS (and some even do Windows!).
The DSVM is an image concept that starts above the OS, so even the Windows editions will have basically the same toolset. If you look here there is a rundown of what the goals of the DSVM are.
I don't have any knowledge of the Linux/Unix environment, so to gain some understanding I am putting this question to developers and Unix/Linux technical people.
By applications I mean IDEs used by developers, especially:
Visual Studio
IntelliJ IDEA Community Edition
PyCharm Community Edition
Eclipse
And other peripheral apps used by developers, gamers and network engineers.
To some experienced Linux users my question might seem baseless, but please consider me a beginner with Linux. Thank you in advance.
The term "application" is a very vague, fuzzy one these days. It does not describe some artifact with a certain internal structure and way how to invoke it but merely the general fact that it is something that can be "used".
Different types of applications are in wide spread use on today's systems, that is why I asked for a clarification of your usage of the term "application" in the comments. The examples you then gave are diverse though they appear comparable at first sight.
A correct and general answer to your question would be:
One application can be used in different Linux-based environments if that environment provides the necessary preconditions to do so.
So the core of your question shifts towards whether different flavors of Linux-based systems offer similar execution environments. Actually it makes sense to extend that question to operating systems in general; the difference between today's alternatives is relatively small from an application's point of view.
A more detailed answer will have to distinguish between the different types of applications, or better, between their different preconditions. Those can be derived from the architectural platform the application is built on. The following is a bit simplified, but should express what the situation actually is:
Take for example the IntelliJ IDEA and the Eclipse IDE. Both are Java-based IDEs. Java can be seen as a kind of abstraction layer that offers a very similar execution environment on different systems. Therefore both IDEs can typically be used on all systems offering such a "Java runtime environment", though differences in behavior will exist where necessary. Those differences are either programmed into the IDEs or originate from the fact that certain components (for example file selection dialogs) are not actually part of the application, but of the chosen platform. Naturally they may look and behave differently on different platforms.
There is however another aspect that is important here, especially when regarding Linux-based environments: the diversity of what is today referred to as "Linux". Unlike MS-Windows or Apple's Mac OS X, which both follow a centralized and restrictively controlled approach, we find differences among the various Linux flavors that extend far beyond things like component versions and their availability. Freedom of choice allows for flexibility, but also results in a slightly more complex reality. Here that means different Linux flavors do indeed offer different environments:
Different hardware architectures: unlike MS-Windows and Mac OS X, the system can be used not only on Intel x86 based hardware, but on a variety of maybe 120 completely different hardware architectures.
The graphical user interface (GUI or desktop environment, so windows, panels, buttons, ...) is not an integral part of the operating system in the Linux (Unix) world, but a separate add-on. That means you can choose.
The amount of base components available in installations of different Linux flavors differs vastly. For example there are "full-fledged, fat desktop flavors" like openSUSE, Red Hat or Ubuntu, but there are also minimalistic variants like Raspbian, Damn Small Linux, Puppy or Scientific Linux, distributions specialized in certain tasks like firewalling, or even variants tailored for embedded devices like washing machines or moon rockets. Obviously they offer different environments for applications. They only share the same operating system core, the "kernel", which is all the name "Linux" actually refers to.
...
However, given all that diversity with its positive and negative aspects, the Linux community has always been extremely clever and active and has crafted solutions to handle those specific situations. That is why all modern desktop-targeting distributions come with a mighty software management system these days. It controls dependencies between software packages and makes sure that those dependencies are met or resolved when attempting to install some package, for example an additional IDE as in your case. So the system would take care of installing a working Java environment if you attempt to install one of the two Java-based IDEs mentioned above.
That mechanism only works, however, if the package to be installed is correctly prepared for the distribution. This is where the usage of Linux-based systems differs dramatically from other operating systems: here come repositories, how to search, select and install available and usable software packages for a system, and so on, all a bit too wide a field to be covered here. Basically: if the producer of a package does their homework (or someone else does it for them) and correctly "packages" the product, then the dependencies are correctly resolved. If however the producer only dumps a raw bunch of files, maybe as a ZIP archive, and insists on a "wild" installation as typically done for example on MS-Windows based systems, that is, writing files into the local file system by handing administrative rights to some bundled "installer" script that can do whatever it wants (including breaking, ruining or corrupting the system it is executed on), then the system's software management is bypassed and the outcome is often "broken".
However, no sane Linux user or administrator would follow such a path and install such software. That would show a complete lack of understanding of how their own system actually works, and a consequent abandonment of all the advantages and comfort it offers.
To make a complex story simple:
An "application" usually can be used in different Linux based environments if that application is packaged in a suitable way and the requirements like runtime environment posed by the application are offered by that system.
I hope that shed some light on a non trivial situation ;-)
Have fun!
I know base images are minimal operating systems with limited kernel features. If I want to use the Ubuntu base image for my applications, how can I know if the kernel features included are enough for them? Are there any commands to show the kernel features included in the base images? Thanks a lot!!
This is a common misconception regarding containerization vs virtualization.
A Docker image is just a packaged file structure with some additional metadata. A Docker container is simply an isolated process on the host (see cgroups) using the image as its root file system (see chroot). This is what makes containers so lightweight as compared to running a full VM.
To answer your question, a Docker container can only rely on the kernel features of the host system it is running on.
If your application requires uncommon kernel features, Docker might not be the best solution, though you could easily add a check for those features as part of the container startup to inform the user and give further instructions.
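As a rough illustration of that kind of startup check, here is a hedged Go sketch that inspects the host kernel the container is actually running on: the kernel version string, the namespace types it exposes, and one specific probe (cgroup v2). The particular checks are examples only, not an authoritative list of what any given application needs.

    // kernelcheck.go: a sketch of a container startup probe. Everything it looks
    // at belongs to the HOST kernel, because that is the only kernel a container has.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Kernel version string; inside a container this is still the host's kernel.
        if v, err := os.ReadFile("/proc/version"); err == nil {
            fmt.Print("kernel: ", string(v))
        }

        // Namespace types supported by the running kernel appear in /proc/self/ns.
        if entries, err := os.ReadDir("/proc/self/ns"); err == nil {
            fmt.Print("namespaces:")
            for _, e := range entries {
                fmt.Print(" ", e.Name())
            }
            fmt.Println()
        }

        // Example of a specific feature probe: is the unified cgroup v2 hierarchy mounted?
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2: available")
        } else {
            fmt.Println("cgroup v2: not detected")
        }
    }

Run inside the container and on the host, this reports the same kernel information, which demonstrates the point above: the image contributes a file system, not a kernel.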
Can Docker images created with one version of Linux (say Ubuntu) be run without problems on ANY other version of Linux, e.g. CentOS?
So far I have not had problems in my testing, but I am new to this.
I'd like to know if there are any specific use cases that might make a Docker container non-functional on a host node due to the host's Linux version.
Thank you
Can Docker images created with one version of Linux (say Ubuntu) be run without problems on ANY other version of Linux, e.g. CentOS?
Older kernels may not have the necessary namespace support for Docker to operate correctly, although at this point Docker seems to run fine on the current releases of most common distributions.
Obviously the host must be the appropriate architecture for whatever you're running in the container. E.g., you can't run an ARM container on an x86_64 host.
If you are running tools that are tightly coupled to a particular kernel version, you may run into problems if your host kernel is substantially newer or older than what the tools expect. E.g., you have a tool that wants to use ipset, but ipset support is not available in your host kernel.
You're only likely to have an issue if you have code that relies on a kernel feature that isn't present on another host. This is certainly possible, but unusual in everyday usage.
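Sticking with the ipset example, here is a hedged Go sketch of how a containerised tool could test for such a host-kernel dependency before relying on it: it scans /proc/modules for a named module. The module name is only an example, and a feature compiled directly into the kernel will not show up there, so treat this as a heuristic rather than a definitive check.

    // modulecheck.go: a minimal sketch tied to the ipset example above. Before a
    // containerised tool relies on a kernel facility, probe whether the host
    // kernel actually provides it.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded reports whether a module with the given name appears in
    // /proc/modules on the host kernel (which is the kernel the container uses).
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) > 0 && fields[0] == name {
                return true, nil
            }
        }
        return false, scanner.Err()
    }

    func main() {
        // "ip_set" is the module behind the ipset tool mentioned above; the name
        // is used purely as an example of a host-kernel dependency.
        ok, err := moduleLoaded("ip_set")
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot inspect host kernel modules:", err)
            os.Exit(1)
        }
        if ok {
            fmt.Println("ip_set module is loaded on this host")
        } else {
            fmt.Println("ip_set module not loaded: the containerised tool may fail here")
        }
    }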