I am moving my Windows-hosted SPA app into a Linux container. I am somewhat familiar with Ubuntu, so I was going to use that.
The Node.js page on Docker Hub shows images for several Debian versions and for Alpine.
But nothing for Ubuntu.
Is Ubuntu not recommended for use with Node.js by the Node.js team?
Or is it just too much work to keep Node.js images prepped for lots of Linux distros, so the Node team stopped at Debian and Alpine?
Or is there some other reason?
Ubuntu is too heavy to use as a base image for running a Node application as a server. Debian and Alpine are much more lightweight.
On top of that, if you have some knowledge of Ubuntu, then Debian and Alpine won't be a big change. At the end of the day Ubuntu is built on top of Debian, and they're all Linux distros, so you should be fine. Especially since you only need to do your configuration steps once and save them as part of the container image; after that, every build produces the same container with the right setup. The beauty of containers.
Ubuntu is just a really heavy base and is going to add a ton of packages into the container that are most likely unnecessary. If you're going to be building production-grade containers, Alpine is usually the go-to. It has a minimal amount of libraries installed, reducing the overall size of the container, and should be closest to the bare minimum that your application needs to run. I'd start there.
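For a sense of what that looks like in practice, here is a minimal sketch of an Alpine-based image for an SPA, assuming an npm-based project with build and start scripts (the script names and port are assumptions; adjust to your app):

    # node:18-alpine keeps the base small; pick the tag matching your Node version
    FROM node:18-alpine
    WORKDIR /app
    # copy manifests first so the dependency layer is cached between builds
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build
    EXPOSE 3000
    CMD ["npm", "start"]

The same Dockerfile works unchanged with a Debian-based tag like node:18-slim if you ever hit Alpine/musl compatibility issues with native modules.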
A little intro:
I have two OSes on my PC: Linux and Windows. I need Linux for work, but it freezes on my machine while Windows does not. I've heard that is a common thing with ASRock motherboards.
That's why I want to switch to Windows for work.
So my idea was to create a Docker image with everything I need for work, such as yarn, make, and a lot of other stuff, and run it on Windows to get the Linux functionality. You get the idea.
I know that Docker is designed to do only one thing per image, but I gave this a shot.
But there are constant problems. For example, right now I'm trying to install nvm in my image, but after building the image the command nvm is not found in bash. It is a known problem: running source ~/.profile makes the command available in the console, but running it while building the image doesn't affect the console you get when you run the image. So you need to do that manually every time you use the image.
People suggest putting this in .bashrc, which gives a segmentation fault.
And that's just my problem for today; I've encountered many more, as I've been trying to create this image for a couple of days already.
So my question is basically this: is it possible to create a fully operational OS in one Docker image, or could one connect multiple images to create an OS, or do I just need to stop and use a virtual machine like a sensible person?
I would recommend using a virtual machine for your use case. Since you will be using this for work, modifying settings, and installing new software, these operations are better suited to a virtual machine, where it is expected that you change the state or configuration.
In contrast, Docker containers are generally meant to be immutable: the running instance of the image should not be altered or configured, so that others can pull down the image and it works out of the box. Additionally, most Docker containers available on Docker Hub are made to be lean, with only one or two use cases in mind and nothing extra (for security and image-size reasons), so I expect you would frequently run into problems trying to set up a Docker image to work inside of. Lastly, since this is not done frequently, there is less help available online, and Docker-level virtualization does not really suit your situation.
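That said, if you do keep experimenting with Docker, your specific nvm problem has a standard workaround: each RUN step (and each docker run) starts a fresh shell, so anything you source during the build is lost, whereas ENV persists. A rough sketch, assuming nvm's default install directory and an example pinned Node version:

    FROM ubuntu:20.04
    RUN apt-get update && apt-get install -y curl ca-certificates
    # install nvm (the version tag here is just an example)
    RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    # nvm is a shell function, so export its environment instead of sourcing ~/.profile
    ENV NVM_DIR=/root/.nvm
    RUN bash -c ". $NVM_DIR/nvm.sh && nvm install 16.20.2"
    # put the installed Node on PATH for later RUN steps and for docker run
    ENV PATH=$NVM_DIR/versions/node/v16.20.2/bin:$PATH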
Currently learning Docker and containerization, I get a little confused by the term "Moby runtime".
My understanding is that Docker has been split up into several libraries/tools/components, allowing developers to build their own version of Docker using the Moby runtime.
Is this assumption correct?
What exactly is the relationship between the Moby runtime and, for example, the Docker Desktop I download on my Windows machine from the official Docker page?
Why does, for example, Microsoft use the Moby runtime to run services like IoT Edge instead of the official Docker build? Do they use their own customized version of Docker?
Yes, I think your understanding is correct.
From the official website:
Moby is an open framework created by Docker to assemble specialized container systems without reinventing the wheel. It provides a “lego set” of dozens of standard components and a framework for assembling them into custom platforms. At the core of Moby is a framework to assemble specialized container systems which provides: Components, Tools, Assemblies.
It also says:
Moby IS RECOMMENDED for anyone who wants to assemble a container-based system: Hackers who want to customize or patch their Docker build.
And this diagram may make it even clearer:
From it you can see that you could start your own project, just like Docker CE and Docker EE, based on the Moby project. Here is a good article that I think explains it clearly, and here is a response from the official team on the relationship.
Moby is a bit of an overloaded name at Docker. In addition to being the name of one of their mascots (Moby is the blue whale you often see in the logos), Moby is:
An upstream open source project that Docker has given to the community. This gives separation from the closed source portions of Docker and from the parts with Docker's trademark attached. You can see these projects in their GitHub repositories. You can think about the Moby Project the same way you think of Fedora as the upstream for Red Hat: Docker does most of their development in the Moby Project repos and packages specific releases from there under the Docker name, which you see as Docker CE and Docker EE. Some projects may live here forever, but Docker also strives to move them further upstream to be managed by external organizations; e.g. containerd and notary have both been transitioned to the Linux Foundation.
It is the repository name that was formerly docker/docker, now moved to moby/moby. This is the core of the Docker engine.
It is a virtual machine that is packaged using LinuxKit. This VM is a minimal environment to run Docker containers, well suited for desktop and embedded environments where you don't want to manage the VM itself.
The latter is most likely what you are thinking of as the "Moby runtime". A VM is needed to run Linux containers on a Windows or Mac environment (Docker containers depend on a lot of kernel functionality that would not be easy to emulate). You can even see examples of building similar VMs in the LinuxKit examples. Inside that VM is the same Docker CE engine that is installed natively on a Linux host. And the VM itself is built and maintained by Docker.
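If you want to make the relationship concrete, the LinuxKit tooling lets you build such a VM yourself from a short YAML definition. A sketch of the documented quickstart flow (the example file name is taken from the linuxkit/linuxkit repo; verify it against the current README):

    # build a minimal Docker-capable VM image from a LinuxKit YAML definition
    linuxkit build examples/docker.yml
    # boot the resulting image locally (e.g. with HyperKit on Mac or QEMU)
    linuxkit run docker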
I have not used Docker at all before, but I have a Flask app running on an Azure server right now which I would like to mostly replicate to another server. The setup is:
Ubuntu 16.10
Anaconda for my Python environments
A few systemd files to configure nginx and uwsgi
My goal is to start fresh on my current server without having to do a fresh install of the OS (I do not have the ability to do this). I have a few issues with environments and multiple Python versions which I would like to escape from.
I would then like to take this set up and send it over to another server which is completely fresh (a brand new Azure instance which hasn't been touched yet). Is this possible with Docker?
To make things clear, Docker is not a technology to migrate applications from one server to another. Docker is a "virtualization" technology which allows you to isolate applications while they are running. Once you have this isolation, the Docker containers can be migrated to any server that has Docker installed. Thus you relieve yourself of issues like "it works on this machine, but it doesn't work on that one".
To do that, you first need to Dockerize your application. Your requirements are very common, and there are many samples online of how to containerize such applications.
However, you first need to learn about Docker to get started (which takes a couple of hours/days). You can start learning about Docker here. Once you have your application Dockerized and working on one machine, moving it to another server is a piece of cake.
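To give you a feel for it, a minimal Dockerfile for a Flask app served by uWSGI might look like the sketch below (the module path app:app and the file names are assumptions to adjust to your project; it also uses plain pip instead of Anaconda, which is the more common pattern in containers):

    FROM python:3.9-slim
    WORKDIR /app
    # requirements.txt should list flask, uwsgi, and your other dependencies
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    EXPOSE 8000
    # serve the Flask object `app` defined in app.py
    CMD ["uwsgi", "--http", "0.0.0.0:8000", "--module", "app:app", "--processes", "4"]

Note that your systemd files would not carry over: in the Docker world, nginx would typically run as a second container in front of this one, with each process getting its own container rather than being supervised by systemd.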
I am very new to Docker.
I am using macOS and I have a local Jenkins server up and running. I would like to simulate the Red Hat Linux environment using Docker.
I am supposed to perform the following steps:
Get the Docker image for RHEL (Red Hat Enterprise Linux)
From where shall I get a Docker image for RHEL?
Pull the images for the JDK and Jenkins.
Run the Jenkins server and set up the new jobs.
Are the above steps correct?
Am I going in the right direction?
You are going to need to create a custom Dockerfile, as by default Jenkins runs on the Alpine or OpenJDK image. The OpenJDK image, in turn, is based on a specific version of buildpack-deps, which is based on Debian. While you can create a Dockerfile with multiple FROM statements, it is buggy and I wouldn't recommend it.
Get the Docker image for RHEL (Red Hat Enterprise Linux). From where shall I get a Docker image for RHEL?
Based on what I could find (I reserve the right to be wrong on this, since I don't typically use RHEL in a Docker environment), there is no official RHEL image for Docker. That article goes over how to create one. Alternatively, there are CentOS images, so that would likely be your best course. If there is no compromising on official RHEL, you are stuck doing what that article states or trusting an unofficial image made by Some Person.
Pull the images for the JDK and Jenkins.
Once you've sorted the above out, you'll need to start building the Dockerfile to accommodate Jenkins. I would look at the OpenJDK 8 and Jenkins Dockerfiles and port them (to the extent possible) to Red Hat/CentOS.
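A very rough starting point for such a port might look like this (the package name, Jenkins version, and paths are assumptions; cross-check them against the official Dockerfiles):

    FROM centos:7
    # Jenkins needs a JDK/JRE; java-1.8.0-openjdk is the usual CentOS 7 package
    RUN yum install -y java-1.8.0-openjdk && yum clean all
    # fetch the Jenkins WAR; pin whichever LTS version you actually want
    ADD https://get.jenkins.io/war-stable/2.346.1/jenkins.war /usr/share/jenkins/jenkins.war
    EXPOSE 8080
    CMD ["java", "-jar", "/usr/share/jenkins/jenkins.war"]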
Run the Jenkins server and set up the new jobs.
That part would be easy once the other portions are done. You can follow the directions at the official Jenkins Docker Hub page.
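If the official image turns out to be acceptable after all, running it is a one-liner; the named volume keeps your jobs and configuration across container restarts:

    docker run -d -p 8080:8080 -p 50000:50000 \
      -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts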
Am I going in the right direction?
Not in my opinion. Docker shouldn't really be used to test environments like that. First, it won't be a very good test if you expect to run this in a non-Docker environment, since a container is effectively a chroot/"special" environment. And if you expect to run it in Docker, you are re-inventing the wheel by making the wheel square, since there is already an official Jenkins image.
I'm wondering how Docker can run RHEL (with a 2.6 kernel) on a Debian host (assume Docker runs on Debian with the latest 3.x kernel). How does Docker's layering approach work here? As far as I know, Docker uses a concept called OS-level virtualization, so it adds layers or rings to the base image. But how does that work with different kernel versions? And will there be any performance degradation?
From the docs, Docker is available only as part of RHEL 7 onwards (not sure about Debian). Linux containers involve things like resource management, process isolation, and security; some of the features use cgroups, namespaces, and SELinux, which were already available earlier, IMHO. Docker basically automates the deployment of applications inside these containers and provides the capability to package the runtime dependencies into a container. The key point for your question is that a container never runs its own kernel: an RHEL image only supplies the RHEL userland (libraries, binaries, package manager), and every container runs on the host's kernel. There is therefore no kernel-version layering to reconcile, and essentially no performance degradation compared to running the processes natively.
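You can see the kernel sharing for yourself: no matter which distro's userland an image carries, a process inside the container reports the host's kernel.

    # a CentOS userland on a Debian host still prints the Debian host's kernel,
    # because containers share the host kernel instead of booting their own
    docker run --rm centos:7 uname -r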