How does Docker work if the OS environment changes? - linux

I am very new to Docker and have some very basic doubts. Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that Docker image I create a Docker container. Now my container is created. Now suppose another team mate wants to deploy that Docker image to another system running Windows/macOS. What is the procedure? Do I need to write the Dockerfile again? Or do I need to pull the app from GitHub again and follow the same steps that I mentioned above? Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows/macOS.
And secondly, where is the image file located? I know it is not sitting as plain files in my local system. How can I see the files/folders of the Docker image?
I know this is a very simple question to ask, but any help is highly appreciated. Thanks.

Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that Docker image I create a Docker container. Now my container is created.
Just to be sure that this is clear: consider a "Docker image" as "a recipe" and a "Docker container" as "a cake". You can make as many cakes as you like with a given recipe. The recipe is what you share if you want others to be able to re-bake the cakes.
Now suppose another team mate wants to deploy that Docker image to another system running Windows/macOS. What is the procedure? Do I need to write the Dockerfile again? Or do I need to pull the app from GitHub again and follow the same steps that I mentioned above?
And thus it's the image that you share with other developers, not the container. This can be done either by pushing the image to an online registry (e.g. https://hub.docker.com/) or by recreating the image every time from the Dockerfile.
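For example, a minimal sketch of the registry workflow (the account name yourname and image name myapp are made up for illustration):
# build the image from the Dockerfile in the project directory
docker build -t yourname/myapp:1.0 .
# log in and push the image to Docker Hub
docker login
docker push yourname/myapp:1.0
# a teammate can then pull and run the very same image on their machine
docker pull yourname/myapp:1.0
docker run yourname/myapp:1.0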
Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows/macOS.
I would have to see what exactly you are doing, but it's good practice to make Docker images independent from the host, or at least to make them configurable at image build time or when the container is first run.
To give a concrete example: in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can be run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; however, when using Docker for Mac, the setting needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
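The excerpt above stops before the build target, but presumably the detected value is handed to docker build as the DOCKER_HOST_ADDR build argument declared in the Dockerfile. A hypothetical target might look like this (the image tag adsdaq/dev-api is made up):
## hypothetical target: wire the detected host into the dev image
build-dev:
	docker build --build-arg DOCKER_HOST_ADDR=$(DOCKER_HOST) -t adsdaq/dev-api .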
As shown here, the image will be built differently on Linux than on Mac OS for dev purposes, and that is fine since we don't need to push those images to any repository manager.
If you need to share the image on a repository, then I would make sure that the configuration can be changed dynamically through parameters and/or possibly an entrypoint script.
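As an illustration of the entrypoint idea (this is a sketch, not the author's actual setup): a small script can resolve host-specific settings when the container starts, so one shared image works everywhere. The file name docker-entrypoint.sh is hypothetical; PHP_XDEBUG_REMOTE_HOST is the variable from the Dockerfile above.
#!/bin/sh
# docker-entrypoint.sh (sketch): default the debugger host at runtime
: "${PHP_XDEBUG_REMOTE_HOST:=localhost}"
export PHP_XDEBUG_REMOTE_HOST
# hand over to the main process given as CMD
exec "$@"
The same shared image could then be started on a Mac with docker run -e PHP_XDEBUG_REMOTE_HOST=docker.for.mac.localhost ... without rebuilding it.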
And secondly, where is the image file located? I know it is not sitting as plain files in my local system. How can I see the files/folders of the Docker image?
You cannot browse the files/folders of a Docker image directly. To see what's in the image you need to run a container from it, as that gives you an instance of it! Remember the cake/recipe analogy: you cannot see the inside of the cake until you have baked it using the recipe.
You can however see all images stored on your machine by running docker images.
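A short sketch, reusing the hypothetical yourname/myapp image from above and assuming the image ships a shell:
# list all images stored locally
docker images
# start a throwaway container and look at the image's files from inside
docker run -it --rm yourname/myapp:1.0 sh
# or inspect the image metadata (layers, env, entrypoint) without running it
docker image inspect yourname/myapp:1.0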
Hope this helps figuring things out. Don't hesitate to share your Dockerfile if you need more assistance.

You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you made a Dockerfile and built an image from it. Now let's say you move to another PC which is running Windows. If Docker is running on that Windows machine and you want to build/run or do whatever else with your image, you can do that without changing anything.
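For instance, one way to hand the built image to the other machine without a registry (the image name myapp is only an example):
# on the Linux host: export the built image into a tar archive
docker save -o myapp.tar myapp:latest
# copy myapp.tar to the Windows/macOS machine, then load and run it there
docker load -i myapp.tar
docker run myapp:latest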
For a more detailed/specific answer you will have to post the Dockerfile.

Related

How to develop node.js apps with docker on Windows?

I am developing a Node.js app using Windows 10 WSL with a remote container in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder with the source code on the Windows side as a volume, but in that case the source code folder and the files coming from the Docker container all end up with root permissions.
In the Docker container, Node.js runs as the unprivileged node user.
For the above reasons, Node.js will not have write permission to the bound folders.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not use it because I am bind-mounting from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
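A rough sketch of that idea with the docker CLI; the NFS server address, export path and index.js entry point are placeholders, and in Compose the same options would go under driver_opts of a named volume:
# create a named volume backed by an NFS export instead of a Windows bind mount
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4 \
  --opt device=:/exports/app \
  node_src
# mount it into the container; ownership then comes from the NFS export,
# so the unprivileged node user can write to it
docker run -v node_src:/usr/src/app -w /usr/src/app node:lts node index.js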

How to exclude the VENV in Docker PUSH

I have a basic Python script that I would like to containerize. As part of the script, I have to pip install az.cli, which is almost 500 MB in size. Locally, it works all right. When I docker build it and docker run it, it works just as it's supposed to. The issue is when I want to docker push it (to Docker Hub for now).
It's packaging the entire project with the venv, which is ~550 MB. I'd like to avoid that if possible. I added the venv directory to the .dockerignore file, but that doesn't seem to help. I know it's pushing the whole image to Docker Hub, so essentially: is there a way to build/run the Docker application without az.cli baked in?
FYI: I am new to Docker, so what I am asking may not make sense.

How to list all files accessed after running docker?

I have to deal with some very large vendor support packages for embedded development. I've used Docker successfully just as a means of keeping their installs segmented away from the rest of my system and for the sake of environment reproducibility. That works great, but often these installs are monoliths, including a ton of files and functionality I don't need, especially in a CI environment. And moving giant, slow-to-recreate Docker images around is a pain.
So, in the interest of teasing out just the features I need, and porting them to a much smaller image, I’m wondering:
Can I run a docker image, performing some CI-relevant task, and then find all the files that were accessed in the duration the docker image was running?
The plan after that would be to copy all those files into a tarfile or similar, then use that for specialized images in the future. So as an alternative question... is that plan worth pursuing?
Thanks :) -Chloë
Maybe this will not answer your question exactly, however it may help.
You can check what is happening in the container by
checking its logs through the docker container logs command.
checking the modifications performed in its filesystem through the docker diff command.
Here is an example
# run a ubuntu container
$ docker run -it --rm --name focal ubuntu:focal
# run a command inside the container
root@aa86b4988bfe:/# echo "test" > test.txt
# from a second terminal on the host, the message shows up in the logs
$ docker container logs --follow --details focal
# root@aa86b4988bfe:/# echo "test" > test.txt
# checking the differences
$ docker diff focal
# A /test.txt

Cannot install inside docker container

I'm quite new to Docker, but I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log into this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other package, like npm.
5. Now commit the above image.
Note: To execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that persists is to commit it and use the newly created image for further changes. I hope this helps.
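A rough sketch of the commit-based approach described above, assuming the running container is named jenkins (the new image tag is made up):
# open a root shell in the running Jenkins container (no sudo needed)
docker exec -u root -it jenkins bash
# inside the container: install the missing package, then leave
apt-get update && apt-get install -y bzip2
exit
# back on the host: freeze the change into a new image
docker commit jenkins myjenkins:with-bzip2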
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
The Jenkins image is configured to run as the user "jenkins", so when modifying this image you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container you can run docker commands as any user; from the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
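A sketch of that child-image approach; bzip2 matches the error above, and the base tag is the one named in this answer (current official images may be tagged jenkins/jenkins:lts instead):
FROM jenkins:latest
# switch to root so apt-get can install the missing tool
USER root
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged jenkins user
USER jenkins
Building this with docker build -t myjenkins . gives a reproducible image instead of a hand-committed one.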
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am a root user, which means I do not have to type superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration file changes for a specific web service, run environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: to have code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is confusing with respect to what is a container and what is an image (images are save points of layers of code that are fetched from Docker's servers or can be created from containers, which themselves require a base image to build from; Dockerfiles serve as a way to automate the build process of making images by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands, but those commands do not really come to pass on the local environment. While running some tests on a docker build working with CentOS, the basic command structure goes
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said for Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was to view and modify the container, which can be done by doing either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
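For example, a sketch combining several of these flags (this deliberately removes most of the isolation, so use it with care):
# share the host's network, PID namespace and filesystem with the container
docker run -it --rm \
  --net=host \
  --pid=host \
  --privileged \
  -v /:/host \
  centos bash
# inside the container, the host's root filesystem is visible under /host
ls /host/etc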
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.
