Should we install a docker image from Docker Hub at the beginning? - linux

I want to introduce Docker into my development environment.
I wanted to create a Docker image from an existing Linux machine,
but I could not find an official method in the Docker documentation.
https://docs.docker.com/learn/
(I know there are some ways described on the Internet to create a Docker image, such as converting an .iso file to a .tar.gz file.
However, they are not official.)
After that, I pulled a Debian image from Docker Hub with the 'docker pull' command.
However, I could not find the exact version of Debian I wanted.
So, to get an OS with the correct kernel version and the correct Debian version,
should I customize an image after pulling it from Docker Hub?
Or is there an official way to create a Docker image from an existing Linux machine?

Sounds like you should be looking at HashiCorp's Packer; it would allow you to build your own Docker base images from whatever base you wish.
https://www.packer.io/docs/builders/docker.html
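For illustration, a minimal Packer template using the docker builder might look like the one below; the base image, provisioning commands, and repository name are placeholders for your own setup, not a definitive configuration:
{
  "builders": [
    {
      "type": "docker",
      "image": "debian:9",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get install -y --no-install-recommends curl ca-certificates"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "docker-tag",
      "repository": "myregistry/my-debian-base",
      "tag": "latest"
    }
  ]
}
Running packer build template.json pulls the base image, runs the provisioners inside a container, commits the result, and tags it as myregistry/my-debian-base:latest, which you can then docker push to your registry.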

Related

Docker image with imported volume runs on Docker for Mac, fails with "exec format error" on Docker CE on Linux

We're building an OS image using Yocto on Debian, which outputs a bzipped volume that we can use as a Docker base image via docker import; we push this image to our registry to use as a base image.
cp build/tmp/deploy/images/raspberrypi4/device.tar.bz2 .
docker import device.tar.bz2 registry/base_image
docker push registry/base_image
We include the base image as part of another docker image:
FROM registry/base_image
ADD target/app.jar app.jar
ADD docker-run.sh run.sh
ENTRYPOINT "./run.sh"
This image is then successfully built by our CI on a Linux (Amazon Linux 2) agent and pushed to the registry. I'm able to pull the image and run it on a Mac with the current version of Docker for Mac.
However, trying to run the same Docker image on a Linux machine (even on the same Linux build agent) results in the following exec format error:
standard_init_linux.go:228: exec user process caused: exec format error
Using an alternative Docker image as the base allows the entrypoint to execute, so I'm pretty sure the issue is related to our custom base image.
As Docker is largely cross-platform, I'm surprised it works on macOS (Intel and M1) but not Linux (tested on Ubuntu and Amazon Linux). I've tried both the Ubuntu and Docker-hosted apt repositories for the Docker install.
How can I further debug?
The issue here was that the base image was ARM-based, and Docker for Mac can run ARM images out of the box, even on Intel machines.
https://docs.docker.com/desktop/multi-arch/
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
There's a good write-up here on running ARM Docker images on Linux x86 hosts:
https://matchboxdorry.gitbooks.io/matchboxblog/content/blogs/build_and_run_arm_images.html
After installing QEMU on your host OS, you need to mount the QEMU binary into the container:
docker run -it --name your-container-name -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static your-arm-image
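To confirm the mismatch, you can check which OS and architecture an image was built for (using the registry/base_image name from the question above):
docker image inspect --format '{{.Os}}/{{.Architecture}}' registry/base_image
An image imported from a Raspberry Pi 4 build will typically report linux/arm or linux/arm64, while an x86 host expects linux/amd64. On a Linux host, one commonly used way to register QEMU's binfmt handlers for all foreign architectures is:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes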

How to distribute python3 code which contains external libraries

I wrote a small script in Python 3 that uses numpy, matplotlib, and other libraries, in PyCharm CE on my Linux machine.
I used PyCharm to write the code and create the virtual environment.
The script only works inside PyCharm because of the dependencies.
A friend of mine wants to use my script on a Windows machine, and I'm not sure whether he even has Python installed.
How can I run my script outside PyCharm, or how can I activate the virtual environment created by PyCharm to run the script?
And
how can I create a package or something similar so that I can give the script to my friend, or anyone else, to use freely?
Thanks
One way of going about this is to ask your friend to install Python 3.x and pip on his system. Meanwhile, you create a requirements.txt listing the libraries that need to be installed and their versions, in this format:
dj-database-url==0.5.0
Django==2.2.5
pytz==2019.2
sqlparse==0.3.0
psycopg2>=2.7,<3.0
Then ask your friend to run pip install -r <path to requirements.txt>. This will install all the required libraries, and if there are no OS-specific dependencies the project should run fine.
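If you haven't written the requirements.txt yet, you can generate it from the virtual environment PyCharm created by activating it and freezing the installed packages (the venv path below is an assumption; PyCharm usually creates it inside the project directory):
source /path/to/project/venv/bin/activate
pip freeze > requirements.txt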
Another way of doing it, for a bigger project with OS-specific dependencies, is to use a containerization tool such as Docker. Containerization lets you run projects that depend on packages or environments installed on your machine on other machines as well.
For example: imagine I created a Python-based application that depends on multiple packages on my Debian machine. I can build a Docker image using a python3.x image as the base and install the required packages inside the image at build time. It is fairly simple to do so. After that I can push the image to Docker Hub, which is a registry for storing Docker images. Do keep in mind that images stored there are publicly available; if you are worried about that, you can use a private registry such as AWS ECR to store your images. Once I have pushed the image, anyone with access to it can pull it and spin up a container. A container is an instance of an image, which can run the applications/scripts/anything the image was built to do. To be able to spin up containers, they will need Docker installed on their machine.
This way you can share your project and make it run on anyone's machine with as little hassle as possible. They will not need anything other than Docker installed on their machine. Unlike virtual machines, Docker containers are not heavy on your machine.
In your case, using Docker you can build an image (much like an ISO image) with python3.x as the base, install all the required packages such as numpy, matplotlib and the other libraries, copy the scripts the project needs into the image, and push it to Docker Hub or a private registry of your choice. Then you can give your friend access to the image. Your friend will need Docker for Windows installed on his machine in order to spin up a container from the image you provide. This container will run your script, as it will have all the required dependencies installed in it by you while building the image.
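A minimal sketch of such an image might look like the following Dockerfile; the Python version, script name, and image tag are placeholders rather than values from your project:
FROM python:3.8-slim
WORKDIR /app
# install the dependencies from requirements.txt first, so this layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# my_script.py is a placeholder for your actual script
COPY my_script.py .
CMD ["python", "my_script.py"]
You would then build and publish it with something like:
docker build -t yourname/my-script:latest .
docker push yourname/my-script:latest
and your friend could run it on Windows with docker run --rm yourname/my-script:latest.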
For more info on Docker: https://www.docker.com/

Windows docker container vs Linux docker container

I am new to the Docker/Kubernetes world in general.
As I am just starting with the whole architecture, I have the following:
Azure Container Service -- up and running using Linux for master and agents
Docker for Windows on my machine -- up and running
automated build for a .NET Core application on VSTS using the Docker task to build and push the image to the Azure Container Registry
kubectl running, as well as the Kubernetes UI, on the master nodes
When I build the .NET Core application from my machine using Docker commands and then publish it to the registry, Kubernetes is able to pull and run it. But when the image is built and pushed by the VSTS build tasks, Kubernetes fails to pull the image. After researching the error a bit, it turned out that the image coming from the VSTS build is made for Windows and therefore cannot be pulled.
What is the difference between Docker containers for Windows and Linux, and how can we convert or specify the type while building the image?
What is the difference between Docker containers for Windows and Linux?
In short, Linux Docker containers run on Linux and use the Linux kernel, while Windows Docker containers run on Windows Server 2016 or Windows 10, use Windows Hyper-V virtualization, and require Windows DLLs and other packages.
So we can't run a Windows Docker image on a Linux Docker host.
How can we convert or specify the type while building the image?
As far as I know, we can't convert a Docker image to another type.
We can use VSTS to build a .NET Core Linux Docker image; please refer to this blog.
Also, we can target a different Docker platform by using a Dockerfile to build the image.
For more information about how to build a Windows Docker image with a Dockerfile, please refer to this blog.
For more information about how to build a Linux Docker image with a Dockerfile, please refer to this article.
After the Docker image is created, we can follow this article to push and pull the image from Azure Container Registry.
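If you are unsure which OS a given image was built for, or which OS your local Docker daemon is currently targeting, the following commands can help (the image name is a placeholder for your own registry and repository):
docker image inspect --format '{{.Os}}/{{.Architecture}}' yourregistry.azurecr.io/yourapp:latest
docker version --format '{{.Server.Os}}'
The first command shows the OS/architecture the image targets (for example windows/amd64 vs linux/amd64); the second shows whether the daemon that docker build uses is running in Linux or Windows container mode.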

Running a Windows container on a Linux host

I installed Docker CE on Ubuntu 16.04, and when I try:
docker pull microsoft/windowsservercore
It fails:
Using default tag: latest
latest: Pulling from microsoft/windowsservercore
3889bb8d808b: Pulling fs layer
da87b55a9b63: Pulling fs layer
image operating system "windows" cannot be used on this platform
Is it possible to run a Windows image on a Linux host with Docker? If yes, how should I do that?
Unlike other virtualization systems such as VirtualBox or VMware, Docker images are small and are not fully self-contained, standalone entities.
A Docker image does not include an OS kernel; instead, containers use the kernel of the Docker host.
This makes the images very lightweight, but it also means you won't be able to run a Windows image on a Linux host.
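You can see this kernel sharing directly: a Linux container reports the host's kernel version, because it has no kernel of its own (alpine is used here simply as a small convenient image):
docker run --rm alpine uname -r
uname -r
Both commands print the same kernel version. A Windows image has no Linux kernel it could share, which is why the pull above fails with "image operating system "windows" cannot be used on this platform".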

Creating a Linux machine using Docker for Windows

I have Windows Server 2016, which has Docker built into it... so I am able to create Windows-based containers and play around with them.
But now I want to create and run Linux-based images, and I am not able to do that... I get the error below:
PS C:\Users\harishr> docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
image operating system "linux" cannot be used on this platform
I did install docker-machine to create a Linux machine, but I am not finding the command-line options to do that.
