Running Jenkins on Docker to simulate Red Hat Linux

I am very new to Docker.
I am using macOS and I have a local Jenkins server up and running. I would like to simulate the Red Hat Linux environment by using Docker.
I am supposed to perform the following steps:
Get the Docker image for RHEL (Red Hat Enterprise Linux)
Where can I get a Docker image for RHEL?
Pull the images for the JDK and Jenkins.
Run the Jenkins server and set up the new jobs.
Are the above steps correct?
Am I going in the right direction?

You are going to need to create a custom Dockerfile, as by default Jenkins runs on Alpine or the OpenJDK image. The OpenJDK image, in turn, is based upon a specific version of buildpack-deps, which is based upon Debian. While you can create a Dockerfile with multiple FROM statements, it is buggy and I wouldn't recommend it.
Get the Docker image for RHEL (Red Hat Enterprise Linux). Where can I get a Docker image for RHEL?
Based upon what I could find (I reserve the right to be wrong on this, since I don't typically use RHEL in a Docker environment), there is no official RHEL image for Docker; that article goes over how to create one yourself. Alternatively, there are CentOS images, so that would likely be your best course. If official RHEL is non-negotiable, you are stuck doing what that article states or trusting an unofficial image made by some third party.
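If you go the CentOS route, a quick way to try it (the tag 7 below is just an example) is to pull the image and check what it reports itself as:
docker pull centos:7
# Verify the distribution inside the container
docker run --rm -it centos:7 cat /etc/redhat-release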
Pull the images for the JDK and Jenkins.
Once you've sorted the above out, you'll need to start building the Dockerfile to accommodate Jenkins. I would look at the OpenJDK 8 and Jenkins Dockerfiles and port them (to the extent possible) to Red Hat/CentOS.
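As a rough, untested sketch of what such a port might look like (the package name, Jenkins version and download URL are assumptions borrowed from the official Dockerfiles and the CentOS repositories, so verify them before use):
# Hypothetical CentOS-based Jenkins image; versions and URLs are illustrative
FROM centos:7
# Install a JDK from the CentOS repositories
RUN yum install -y java-1.8.0-openjdk-devel && yum clean all
# Unprivileged user and home directory for Jenkins data
RUN useradd -d /var/jenkins_home -m -s /bin/bash jenkins
# Fetch the Jenkins WAR (pin a version and verify its checksum in practice)
ENV JENKINS_VERSION 2.60.3
RUN curl -fsSL -o /usr/share/jenkins.war \
    https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
USER jenkins
EXPOSE 8080 50000
ENTRYPOINT ["java", "-jar", "/usr/share/jenkins.war"]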
Run the Jenkins server and set up the new jobs.
That would be easy once the other portions are done. You can follow the directions at the official Jenkins Docker Hub page.
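For reference, those directions boil down to something like this (image name and ports as documented on Docker Hub; the volume name is arbitrary):
# Run the official Jenkins image with a named volume for its data
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts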
Am I going in the right direction?
Not in my opinion. Docker shouldn't really be used to test environments like that. First, it won't be a very good test, since a container is effectively a chroot/"special" environment, if you expect to run this on a non-Docker environment later. If you expect to run it in Docker, you are reinventing the wheel by making the wheel square, since you already have an official Jenkins image.

Related

NodeJs on an Ubuntu docker container?

I am moving my Windows-hosted SPA app into a Linux container. I am somewhat familiar with Ubuntu, so I was going to use that.
The NodeJs page on docker hub shows containers for several Debian versions and Alpine.
But nothing for Ubuntu.
Is Ubuntu not recommended for use with NodeJs by the NodeJs Team?
Or is it just too much work to keep Node.js prepped for lots of Linux distros, so the Node team stopped at Debian and Alpine?
Or is there some other reason?....
Ubuntu is too heavy to use as a base image for running a Node application as a server. Debian and Alpine are much more lightweight compared to Ubuntu.
On top of that, if you have some knowledge of Ubuntu, Debian and Alpine won't be a big change. At the end of the day Ubuntu is built on top of Debian, and they're all Linux distros, so you should be fine. Especially since you only need to do your configuration steps once, save them as part of the container image, and you're done: every time it will produce the same container with the right setup. That's the beauty of containers.
Ubuntu is just a really heavy base and is going to add a ton of packages into the container that most likely are unnecessary. If you're going to be building production-grade containers, Alpine is usually the go-to. It has a minimal set of libraries installed, reducing the overall size of the container, and should be closest to the bare minimum your application needs to run. I'd start there.
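As a minimal sketch of what an Alpine-based Node image looks like (the entry point server.js and port 3000 are assumptions about your app):
FROM node:alpine
WORKDIR /usr/src/app
# Install dependencies first so this layer is cached between code changes
COPY package*.json ./
RUN npm install --production
# Copy the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]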

Difference between installing LAMP within a Linux distro container vs installing as separate containers?

I am a newbie with Docker, so I need some clarification. Let me try to explain.
Let's say I have a Windows machine with Docker Desktop installed on it.
What will the structure be? Do I need to first run some Linux distro container and install the LAMP server within that container, or do I create an Apache container, a MySQL container and a Linux container in parallel?
Secondly, I noticed that there are some WordPress containers, which is totally confusing, because to run WordPress I definitely need LAMP. How does that architecture work?
Will it be like:
1 Linux container, on which I install LAMP and then WordPress?
But in that case, what is the purpose of the WordPress container?
Or
1 Linux container
1 Apache container
1 MySQL container
1 WordPress container
with all of them interlinked?
I am too confused, please help me.
In general you will try to have 1 container = 1 service/ 1 purpose and keep the containers very small.
That means you will have MySQL in one container and your Apache server in another container. Both will run on Linux-based container images (here you can go and read about Docker and its layering technique).
Coming back to your architecture, you need to put WordPress somewhere a server is, because without a server the software has no power to do anything. That means you will put it in the Apache container, and eventually you will want a volume (check the Docker docs) to persist your static data.
Lastly, you will want to connect this container with the MySQL container to be able to persist the important data there. You can do that with docker-compose (see docs) and start both containers from one command.
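As a rough illustration of that wiring (image tags, passwords and volume names here are placeholders, not recommendations), a docker-compose.yml could look something like:
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    # Apache + PHP + WordPress packaged in one image
    image: wordpress
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    volumes:
      - wp_data:/var/www/html
volumes:
  db_data:
  wp_data:
Running docker-compose up then starts both containers on a shared network with one command.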
Now the cool part: this is already done for you here, in bitnami/wordpress, and I am sure you can find a lot more on Docker Hub.

What is the Moby runtime?

Currently learning Docker and containerization, I get a little confused by the term "Moby runtime".
To my understanding, the whole of Docker has been split up into several libraries / tools / components, allowing developers to build their own version of Docker using the Moby runtime.
Is this assumption correct?
What exactly is the relationship between the Moby runtime and, e.g., the Docker Desktop I download on my Windows machine from the official Docker page?
Why does e.g. Microsoft use the Moby runtime to run some services like IoT Edge instead of the official Docker build? Do they use their own customized version of Docker?
Yes, I think your understanding is correct.
From official web site:
Moby is an open framework created by Docker to assemble specialized container systems without reinventing the wheel. It provides a “lego set” of dozens of standard components and a framework for assembling them into custom platforms. At the core of Moby is a framework to assemble specialized container systems which provides: Components, Tools, Assemblies.
It also said:
Moby IS RECOMMENDED for anyone who wants to assemble a container-based system: Hackers who want to customize or patch their Docker build.
And the diagram on that page may make it even clearer:
From it you can see that you could start your own project based on the Moby project, just like Docker CE and Docker EE are. Here is also a good article that I think explains it clearly, and a response from the official maintainers on the relationship.
Moby is a bit of an overused name from Docker. In addition to being the name of one of their mascots (Moby is the blue whale you often see in the logos), Moby is:
An upstream open source project that Docker has given to the community. This gives separation from the closed source portions of Docker and the parts with Docker's trademark attached. You can see these projects in their Github repositories. You can think about the Moby Project the same way you think of Fedora as the upstream for RedHat, Docker does most of their development in the Moby Project repos and packages specific releases from there with the Docker name that you see as Docker CE and Docker EE. Some projects may live here forever, but Docker also strives to move these further upstream to be managed by external organizations, e.g. containerd and notary have both been transitioned to the Linux Foundation.
It is the repository name that was formerly docker/docker, now moved to moby/moby. This is the core of the docker engine.
It is a virtual machine that is packaged using LinuxKit. This VM is a minimal environment to run docker containers, and well suited for running on desktop and embedded environments where you don't want to manage the VM itself.
The latter is most likely what you are thinking of by "Moby Runtime". A VM is needed to run Linux containers on a Windows or Mac environment (docker containers depend on a lot of kernel functionality that would not be easy to emulate). You can even see examples for building similar VM's in the LinuxKit examples. Inside of that VM is the same Docker CE engine that is installed natively on Linux host. And the VM itself is built and maintained by Docker.

Simple Docker Concept

I'm going through the getting started with Docker guide and understood most of the basics except for one concept.
I get how docker/whalesay takes up 247 MB. It needs to download a few layers, including a base image of Ubuntu. But shouldn't hello-world be around the same size? It's a self-contained image that can be shipped anywhere.
When hello-world executes, there's still a Linux layer running it somewhere, and I also downloaded hello-world before docker/whalesay so it couldn't have been using the Linux layer downloaded from docker/whalesay. What am I missing here?
It is not an Ubuntu instance. Check the Hub:
https://hub.docker.com/_/hello-world/
Here, if you click on latest, you can see the Dockerfile:
FROM scratch
COPY hello /
CMD ["/hello"]
The FROM defines which operating system it is based on. Scratch is an "empty" image, as described here: https://hub.docker.com/_/scratch/
Looking into the Dockerfile clears things up: it's not using any base image (i.e. Ubuntu, etc.):
FROM scratch
COPY hello /
CMD ["/hello"]
The first directive FROM states the base image for the new image we intend to build. From the docs:
The FROM instruction sets the Base Image for subsequent instructions.
As such, a valid Dockerfile must have FROM as its first instruction.
The image can be any valid image – it is especially easy to start by
pulling an image from the Public Repositories. (Docker Hub)
And FROM scratch (so it isn't using any base image at all, hence the tiny image size) is a special case; the term scratch is reserved. From the docs:
FROM scratch
This image is most useful in the context of building base images (such
as debian and busybox) or super minimal images (that contain only a
single binary and whatever it requires, such as hello-world).
also
As of Docker 1.5.0 (specifically, docker/docker#8827), FROM scratch is
a no-op in the Dockerfile, and will not create an extra layer in your
image (so a previously 2-layer image will be a 1-layer image instead).
EDIT 1 - OP's new comment to clarify it further:
To clarify, there's a very minimal Linux dist installed with Docker.
And this incredibly simple hello-world image uses that default Linux
dist that comes with Docker?
A good clarification by Paul Becotte:
No. Docker does not contain a kernel- it is not a virtual machine. It
is a way to run processes on your existing kernel in such a way as to
trick them into thinking they are completely isolated. The size of the
image is actually a "root file system" ... in this case, the file
system contains only a single file, which is why it is small. The
process actually gets executed on the kernel that is running the
Docker Daemon (your Linux machine on which you installed Docker), with it chroot'ed to the container filesystem.
To clarify it further, I am sharing an example of a minimal image, Alpine:
A minimal Docker image based on Alpine Linux with a complete package
index and only 5 MB in size!
P.S. In the case of hello-world there isn't any base image, not even a minimalistic one.
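If you want to see the size difference yourself (assuming both images from the getting-started guide are still available on Docker Hub):
docker pull docker/whalesay
docker pull hello-world
# Compare sizes: whalesay is hundreds of MB, hello-world a few kB
docker images
# hello-world is a single layer containing just the hello binary
docker history hello-world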

Docker Help: Creating Dockerfile and Image for Node.js App

I am new to Docker and followed the tutorials on Docker's website for installing boot2docker locally and building my own images for Node apps using their tutorial (https://docs.docker.com/examples/nodejs_web_app/). I was able to successfully complete this, but I have the following questions:
(1) Should I be using these Node Docker images (https://registry.hub.docker.com/_/node/) instead of CentOS6 for the base of my Docker Image? I am guessing the Docker tutorial is out of date?
(2) If I should be basing from the Node Docker Images, does anyone have any thoughts on whether the Slim vs Regular Official Node Image is better to use. I would assume slim would be the best choice but I am confused on why multiple versions exist.
(3) I don't want my Docker images to include my Node.js app source files directly, which would force me to re-create my images on every commit. Instead I want my Docker container, when it starts, to pull the source from my private Git repository at a specific commit. Is this possible? Could I use something like an entrypoint to specify my credentials and commit when running the Docker container, so it would then run a shell script to pull the code and start the Node app?
(4) I may end up running multiple different Docker containers on the same EC2 hosts. I imagine making sure the containers are all based off of the same Linux distro would be preferred? That way I would avoid downloading multiple base images when first starting the instance and running the different containers?
Thanks!
It would have been best to ask 4 separate questions rather than put this all into one question. But:
1) Yes, use the Node image.
2) The "regular" image includes various development libraries that aren't in the slim image. Use the regular image if you need these libraries, otherwise use slim. More information on the libraries is here https://registry.hub.docker.com/_/buildpack-deps/
3) You would probably be better off putting the code into a data container that you attach to your app container with --volumes-from. You can find more information on this technique here: https://docs.docker.com/userguide/dockervolumes/
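A rough sketch of that pattern (the repository URL, image choices and file names are placeholders for illustration):
# A data container whose only job is to own the /src volume
docker create -v /src --name app-code busybox /bin/true
# Populate the volume, e.g. by cloning your repository into it
docker run --rm --volumes-from app-code alpine/git \
  clone https://github.com/your-org/your-app.git /src
# Run the Node container against the shared volume
docker run -d --volumes-from app-code node:slim node /src/server.js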
4) I don't understand this question. Note that Amazon now has a container offering: https://aws.amazon.com/ecs/
