Currently learning Docker and containerization, I am a little confused by the term "Moby runtime".
My understanding is that the whole of Docker has been split up into several libraries/tools/components, allowing developers to build their own version of Docker using the Moby runtime.
Is this assumption correct?
What exactly is the relationship between the Moby runtime and, for example, the Docker Desktop I download on my Windows machine from the official Docker page?
Why does, for example, Microsoft use the Moby runtime to run services like IoT Edge instead of the official Docker build? Do they use their own customized version of Docker?
Yes, I think your understanding is correct.
From the official website:
Moby is an open framework created by Docker to assemble specialized container systems without reinventing the wheel. It provides a “lego set” of dozens of standard components and a framework for assembling them into custom platforms. At the core of Moby is a framework to assemble specialized container systems which provides: Components, Tools, Assemblies.
It also says:
Moby IS RECOMMENDED for anyone who wants to assemble a container-based system: Hackers who want to customize or patch their Docker build.
And the next diagram may make it even clearer:
From this you can see that you could start your own project, just like Docker CE and Docker EE, based on the Moby project. Here is a good article that I think explains it clearly, and also this response from the maintainers about the relationship.
Moby is a bit of an overused name from Docker. In addition to being the name of one of their mascots (Moby is the blue whale you often see in the logos), Moby is:
An upstream open source project that Docker has given to the community. This gives separation from the closed source portions of Docker and the parts with Docker's trademark attached. You can see these projects in their GitHub repositories. You can think about the Moby Project the same way you think of Fedora as the upstream for Red Hat: Docker does most of their development in the Moby Project repos and packages specific releases from there with the Docker name that you see as Docker CE and Docker EE. Some projects may live here forever, but Docker also strives to move these further upstream to be managed by external organizations, e.g. containerd and notary have both been transitioned to the Linux Foundation.
It is the repository name that was formerly docker/docker, now moved to moby/moby. This is the core of the docker engine.
It is a virtual machine that is packaged using LinuxKit. This VM is a minimal environment to run docker containers, and well suited for running on desktop and embedded environments where you don't want to manage the VM itself.
The latter is most likely what you are thinking of as the "Moby runtime". A VM is needed to run Linux containers in a Windows or Mac environment (Docker containers depend on a lot of kernel functionality that would not be easy to emulate). You can even see examples of building similar VMs in the LinuxKit examples. Inside that VM is the same Docker CE engine that is installed natively on a Linux host. And the VM itself is built and maintained by Docker.
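For illustration, a LinuxKit configuration is just a short YAML file listing the kernel and the userspace images to assemble into such a VM. A sketch along these lines (the component names follow the LinuxKit examples, but the version tags here are placeholders, not real releases):

```yaml
# Hypothetical minimal LinuxKit config; see the linuxkit/linuxkit
# examples directory for real, buildable files.
kernel:
  image: linuxkit/kernel:5.10.x     # placeholder tag
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:vX               # placeholder tags throughout
  - linuxkit/runc:vX
  - linuxkit/containerd:vX
services:
  - name: getty
    image: linuxkit/getty:vX
```

Running `linuxkit build` on such a file produces a bootable minimal VM image, which is the same mechanism used to build the VM that Docker Desktop runs containers in.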
Related
I currently use Ansible to manage and deploy a fleet of servers.
I wish to start using Docker for some applications and would like to build Docker images using the same scripts we use to configure non-Dockerized hosts.
For example we have an Ansible role that builds Nginx with 3rd party modules, would like to use the same role to build a Docker image with the custom Nginx.
Any ideas how I would get this done?
There is the "Ansible Container" project, https://www.ansible.com/integrations/containers/ansible-container. That page also points to the GitHub repo.
It is not clear how well maintained it is, but their reasoning and approach makes sense.
Consider that you might have some adjustments to do regarding two aspects:
a container should do only one thing (microservice)
how to pass configuration to the container at runtime (Docker has some guidelines, such as environmental variables if possible or mounting a volume with the configuration files)
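A low-tech alternative to Ansible Container is to run your existing role inside the image build itself. A hedged sketch (the role layout and playbook name are hypothetical placeholders for your own files):

```dockerfile
# Build-time Ansible: reuse the same role you apply to non-Dockerized hosts.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y ansible
# 'roles/' and 'site.yml' are placeholders for your existing role/playbook
COPY roles/ /etc/ansible/roles/
COPY site.yml /tmp/site.yml
# Run the playbook against localhost inside the image being built
RUN ansible-playbook /tmp/site.yml --connection=local --inventory localhost,
```

The trade-off is a heavier image (Ansible itself gets installed into it), so you may want a multi-stage build or a cleanup step afterwards.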
That's a perfect example of where the docker-systemctl-replacement script can be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers. It originated when distros switched to systemd, which is hard to enable inside containers. Once /usr/bin/systemctl is overwritten, the Docker container looks close enough to a normal host that all the old Ansible scripts continue to run: installing rpm/deb packages and getting 'service:' entries started and enabled.
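A minimal sketch of the pattern (the script filename and base image are assumptions; copy the actual script from a docker-systemctl-replacement checkout into your build context):

```dockerfile
FROM centos:7
# Replace the real systemctl with the docker-systemctl-replacement script,
# so 'service:'/'systemd:' style tasks work inside the container.
COPY systemctl.py /usr/bin/systemctl
RUN yum install -y httpd && systemctl enable httpd
# Run the replacement as PID 1 so 'enabled' services are started on boot
CMD ["/usr/bin/systemctl"]
```

With this in place, an Ansible playbook targeting the container behaves much as it would against a systemd VM.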
I am an IT supervisor with very little development background, so I apologize for this naive question.
Currently, we are using Weblogic, running in Linux VMs, created by Oracle VM (OVM) to host our application for production.
The development environment also uses the same configuration.
Our developers are suggesting we use docker in the development environment and utilize DevOps to increase the agility of development.
This sounds like a good idea to me, but I still want our production to run on the same configuration running today (Weblogic in Linux VMs over Oracle VM Hypervisor); I do not want to use docker for production.
I have been searching to find out if that is possible with no luck.
I would really appreciate it if you can help.
I have three questions:
Is that possible?
Is it normal practice to run Docker for development only, while using a traditional non-Docker setup for production?
If it is possible, what are the best ways to achieve that?
Thank You
Docker is Linux distro-agnostic. Java development is JEE container-agnostic (if you follow the official Java specs defined in the JSRs).
So these are two reasons why you should see the same behaviour in your development environment and your production environment. Of course, a pre-production environment is welcome to verify this, and do not skip looking at memory and performance issues before committing to it. Moreover, depending on why you are using WebLogic, ask yourself which JVM and JEE container you would run in your Docker containers.
Is that possible?
Yes, we do that in my organization, for some applications, using tomcat (instead of WebSphere for other applications).
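For a JEE web application, the development image can be as small as this sketch (image tag and WAR path are placeholders for your own build output):

```dockerfile
# Stock Tomcat base image; the official image already starts Tomcat on launch.
FROM tomcat:9-jdk11
# 'target/myapp.war' is a placeholder for the artifact your build produces
COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war
EXPOSE 8080
```

Developers then run the same artifact locally with a single `docker run`, while production can keep deploying that WAR onto the existing WebLogic VMs.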
Is it normal practice to run Docker for development only while using traditional non-Docker for production?
There are many practices, depending on the organization's goals, strategy and level of agility. Using Docker for development but not in production is the most common use case nowadays, but the next level is to use a Docker engine in the production environment too. See the next section.
If it is possible, what are the best practices to achieve that?
The difficulty is that in a production environment, you need a system for automating deployment, scaling, and management of containerized applications.
Developers do not need that. So it is really easy for them to migrate to Docker (and it lets them do things easier and faster than without Docker).
In production, you should really consider using Kubernetes or OpenShift, instead of running a simple docker engine, like your developers do. But it is much more complicated than simply installing Docker on a single Windows or Linux host.
I have developed an application and am using docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so customer can deploy it in their VMware environment?
Also I am using a base Ubuntu image and installed NodeJS, MongoDB and other dependencies on it. But I would like to configure my NodeJS based application and MongoDB database as a service within the package I intend to ship. I know how to configure these as a service using init.d on a normal VM. How do I go about this in Docker? Should I have my init.d files in my application folder and copy them over to Docker container during build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is that my target users don't necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files are saved under /var/log for the different services, and the status of all the services can be seen at once, rather than the user having to look into each Docker service, and probably troubleshoot issues with Docker itself.
But at the same time I feel it convenient to build the application the docker way.
VMware vApps are usually made of multiple VMs running together to provide a service; they may have startup dependencies and so on.
Using Docker, you can have those VMs as containers running on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes toward microservices; in short, in your case, put each service in a separate container, then write a docker-compose file to bring the containers up and run it at startup. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository, and let the customers pull them, then provide a docker-compose file for them.
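As a sketch, a docker-compose file for the Node.js application plus MongoDB could look like this (image name and port are hypothetical placeholders):

```yaml
version: "3"
services:
  app:
    image: myregistry/myapp:latest   # hypothetical: your published app image
    ports:
      - "3000:3000"                  # assumed application port
    depends_on:
      - mongo
    restart: unless-stopped          # comes back up with the Docker host VM
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db          # keep database files outside the container
    restart: unless-stopped
volumes:
  mongo-data:
```

The `restart` policies give you the "services come up with the VM" behaviour you would otherwise get from init.d, without baking init scripts into the containers.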
Can anyone understand and explain the fundamental differences of Docker and Rocket?
I don't seem to get it.
Maybe it's just too new of a direction.
Hope someone can explain the fundamental pros and cons of Docker vs Rocket.
Thanks
https://coreos.com/blog/rocket/
https://github.com/coreos/rocket
Rocket is an "early stage" container manager, just as Docker was a few months ago. The new "container runtime" was launched by CoreOS a few days ago, but the interesting thing is WHY?
Essentially CoreOS, as well as other open source developers and startups, say that Docker broke the idea of a "standard container" (the simplicity and composability it started from), as proved by the fact that the original shipping-container 'manifesto' was removed.
For my part, I had already seen many signals of that, starting with the "legal empowerment" of the Docker brand, down to the dropping of LXC containers, the Linux open source technology at the base of Docker, which made its climb possible from the very beginning. I posted my first hot reaction here, in response to the "counterattack" by Solomon Hykes (founder and creator of the Docker project) to the CoreOS announcement.
That is why I found the recent declaration of Solomon Hykes quite curious (hilarious, even): "We're standing on the shoulders of giants".
They have also raised doubts from a security and composability perspective:
From a security and composability perspective, the Docker process
model - where everything runs through a central daemon - is
fundamentally flawed. To “fix” Docker would essentially mean a rewrite
of the project, while inheriting all the baggage of the existing
implementation.
So what is Rocket, in the end?
Rocket is an alternative to the Docker runtime, designed for server
environments with the most rigorous security and production
requirements. Rocket is oriented around the App Container
specification, a new set of simple and open specifications for a
portable container format.
What's the difference from Docker?
The promised foundation of freedom and industrial open standards, as with DNS, HTTP, IMAP, SMTP, TCP/IP, the ISO/OSI stack... the Internet? Or, more concretely, a security and composability perspective.
Have a look at the eclectic speaker and amazing developer Kelsey Hightower's Rocket Tutorial & Demo.
ongoing UPDATE (SPECs - OPENSOURCE - VISION):
[MUST READ] Amazing nitty-gritty details about how Docker is flawed
universal toolkit for emulating Heroku, regardless of stack or container engine
Rocket & App Container Spec Overview
Not clear now, they just forked ;)
But Rocket wants to stick to the pure Unix philosophy:
Unix philosophy: tools should be independently useful
Which implies that Docker is willing to pay less attention to this topic. In my opinion that was not the case until now, but yes, Docker has announced orchestration tools for the future...
CoreOS is building its own orchestration stack, so it doesn't really need Docker's.
Summing up: for now use Docker. And ask this question again in a year.
Rocket uses systemd-nspawn (it can also exec kvm).
There is also an intention to make Rocket a generic framework to manage any virtualized environment shipped with CoreOS.
Docker uses lxc (which in turn does clone (namespaces) and pivot_root). It starts with a read-only base image and adds more images to it, using a union mount to stack additional read-only filesystems on top of the base root fs. It also implements copy-on-write: it starts with an empty read-write layer, and when you write to a file, the file is first copied up to the read-write layer. Check out aufs.
The net effect is very similar (if both are configured to use containers), but the way apps are packaged and deployed is different.
Rocket claims to provide better flexibility by providing an app spec.
Docker provides easy/quick portable packaging and deployment.
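The union-mount idea described above (read-only image layers plus one writable layer, with copy-on-write) can be sketched in plain Python, purely as an illustration of the lookup and write rules, not of any real filesystem API:

```python
class UnionFS:
    """Toy model of a union mount: reads fall through the layers
    top-down; writes always land in the single read-write top layer."""

    def __init__(self, *lower_layers):
        self.lowers = list(lower_layers)  # read-only, base image first
        self.upper = {}                   # the container's read-write layer

    def read(self, path):
        if path in self.upper:            # the writable layer wins
            return self.upper[path]
        for layer in reversed(self.lowers):  # then the topmost image layer
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # copy-on-write: the base layers are never modified
        self.upper[path] = data


base = {"/etc/hostname": "base-image"}
fs = UnionFS(base)
fs.write("/etc/hostname", "my-container")
print(fs.read("/etc/hostname"))   # my-container
print(base["/etc/hostname"])      # base-image (unchanged)
```

Discarding the container is just dropping the `upper` dict; the image layers stay pristine, which is why many containers can share one image.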
Now (2020) Rocket is officially dead: https://github.com/rkt/rkt/issues/4024
After the acquisition by Red Hat, the new owner concentrated its efforts on https://podman.io/
I'm wondering how Docker can run RHEL (2.6) on a Debian host (assume Docker runs on Debian with a recent 3.x kernel). How does Docker's layering approach work here? As far as I know, Docker uses a concept called OS-level virtualization, so it adds layers on top of the base image. But how does that work with different kernel versions? And will there be any performance degradation?
From the docs, Docker is available only from RHEL 7 onwards (not sure about Debian). Linux containers involve things like resource management, process isolation, and security. Some of the features use cgroups, namespaces, and SELinux, which were already available earlier, IMHO. Docker basically automates the deployment of applications inside these containers and provides the capability to package the runtime dependencies into a container. Note that a container always shares the host's kernel: the RHEL image supplies only the userland, which runs on the Debian host's 3.x kernel, so there is no kernel emulation and essentially no virtualization overhead.