Docker Compose file for Debian Linux and Arch Linux

Hope I'm doing this correctly...
First off, we are using docker-compose with a YML file, invoked like this:
sudo docker-compose -f docker-compose.yml up -d
In the yml file we have something similar to:
version: '3.4'
services:
  MyContainer:
    image: "MyContainer:latest"
    container_name: MyContainer
    restart: always
    environment:
      - DISPLAY=unix$DISPLAY
      - QT_X11_NO_MITSHM=1
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - /dev/dri:/dev/dri
      - /usr/lib/x86_64-linux-gnu/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
      - ~/MyFiles/:/root/Myfiles
      - ~:/root/home
Now the problem starts. The team uses two families of operating systems: some machines run Ubuntu, others Arch or Manjaro. As an experienced Linux user might know, this will not work on Arch, because x86_64-linux-gnu is a folder specific to Debian/Ubuntu systems. The equivalent on Arch/Manjaro and nearly every other Linux distro is /usr/lib or /usr/lib64.
Of course, a hack would be to create that folder as a symlink to /usr/lib, but I don't want to do that for every new team member's machine that doesn't run Ubuntu.
So that is all the upfront information.
My question is:
What is, in your opinion, the best approach to solve this problem?
I did a Google search, but either I used the wrong keywords or people simply don't have this problem because they design their containers smarter.
I know that Docker volumes can be created and then used in the docker-compose file, but for that we would need to rerun the setup on all the PCs, laptops and servers we have, which I would like to avoid if possible...
I have a lot to learn, so if you have more experience and knowledge, please be so kind as to point out my mistakes.
Regards,
Stefan

If you're trying to use the host display, host libraries, host filesystem, and host hardware devices, then the only thing you're getting out of Docker is an inconvenient packaging mechanism that requires root privileges to run. It'd be significantly easier to build a binary and run the application directly on the host.
If you must run this in Docker, the image should be self-contained: all of the code and libraries necessary to run the application needs to be in the image and copied in the Dockerfile. Most images start FROM some Linux distribution (maybe indirectly through a language runtime) and so you need to install the required libraries using its package manager.
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      libxv1
...
Bind-mounting binaries or libraries into containers leads not just to filesystem-layout inconsistencies like the one you describe, but in some cases also to binary-compatibility issues. The bind mount won't work properly on a macOS host, for instance. (Earlier recipes for using the Docker socket inside a Docker container recommended bind-mounting /usr/bin/docker into the container, but this could hit problems if a CentOS host Docker was built against different shared libraries than an Ubuntu container Docker.)

The volumes section in Docker Compose supports environment variable substitution. You can make use of that so the host-side library path stays machine-specific.
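For illustration, a minimal sketch of what that could look like, assuming a variable name of my own choosing (HOST_LIB_DIR is made up) that each machine sets in its environment or in a .env file next to the compose file:
services:
  MyContainer:
    volumes:
      # host side varies per machine; container side stays the Ubuntu path the image expects
      - ${HOST_LIB_DIR:-/usr/lib}/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
On a Debian/Ubuntu host you would set HOST_LIB_DIR=/usr/lib/x86_64-linux-gnu; on Arch/Manjaro the /usr/lib default applies. (That said, installing libXv inside the image, as suggested above, avoids the problem entirely.)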

Related

Docker: Where is "reset to factory defaults" on Linux?

I've used Docker on Windows and macOS for the past couple years. Often, when things got really messed up, I found it faster to use the Reset to factory defaults option in the Docker GUI to do a clean reset than to troubleshoot whatever problem was giving me grief.
Now I'm using Ubuntu 20.04 and I can't find this option. I found a long list of commands to remove/reset individual components, but where is the single command for this, like on Windows/macOS?
Use your OS's package manager to uninstall the Docker package; then
sudo rm -rf /var/lib/docker
That should completely undo all Docker-related things.
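For example, on the Ubuntu 20.04 host from the question that might look roughly like this (assuming Docker was installed from Docker's own apt repository; package names differ if it came from Ubuntu's docker.io package):
sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo rm -rf /var/lib/docker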
Note that the "Desktop" applications have many more settings (VM disk/memory size, embedded Kubernetes, ...). The native-Linux Docker installations tend to have very few, and generally the only way to set them is by directly editing the JSON configuration file in /etc. So "reset Docker" doesn't really tend to be an issue on native Linux.
As always, make sure you have an external copy of your images (in Docker Hub or a registry like ECR) or can rebuild them from Dockerfiles, that your containers are designed to tolerate being deleted and recreated, and that, if you use named volumes, you have backups of those.
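A minimal sketch of that preparation, with made-up image and volume names:
docker save myimage:latest -o myimage.tar          # keep an offline copy of an image
docker run --rm -v mydata:/data -v "$PWD":/backup \
  busybox tar czf /backup/mydata.tgz -C /data .    # archive a named volume to the current directory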
You can use this command:
docker system prune -a
Description:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache

How does Docker work if the OS environment changes?

I am very new to Docker and have some very basic doubts about it. Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of that application, and from the Docker image I create a Docker container. Now my container is created. Now suppose another teammate wants to deploy that Docker image on another system running Windows/macOS. What are the procedures? Do I need to write the Dockerfile again, or do I need to pull the app from GitHub again and follow the same steps I mentioned above? Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows/Mac.
And secondly, where is the image file located? It will not be in my local system, I know. How can I see the files/folders of the Docker image?
I know these are very simple questions to ask; still, any help is highly appreciated. Thanks.
Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of that application, and from the Docker image I create a Docker container. Now my container is created.
Just to be sure this is clear: you have to consider the "Docker image" as a recipe and a "Docker container" as a cake. You can make as many cakes as you like with a given recipe. The recipe is what you share if you want to be able to re-bake cakes.
Now suppose another teammate wants to deploy that Docker image on another system running Windows/macOS. What are the procedures? Do I need to write the Dockerfile again, or do I need to pull the app from GitHub again and follow the same steps I mentioned above?
And thus it's the "image" that you will "share" with other developers and not the container. This can be done either by "pushing" the image to an online repository (e.g. https://hub.docker.com/) or by recreating the image every time from a Dockerfile.
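For illustration, the registry route could look roughly like this (myorg/myapp is a made-up name; docker push requires a prior docker login):
docker build -t myorg/myapp:1.0 .
docker push myorg/myapp:1.0
# on the teammate's Windows/macOS machine:
docker pull myorg/myapp:1.0
docker run myorg/myapp:1.0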
Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows/Mac.
I would have to see what exactly you are doing, but it's good practice to make docker images independent from the host. Or at least make it configurable during the first creation of the image or execution of the container.
To give a concrete example, in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; however, when using Docker for Mac, the setting needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image will be built differently on Linux than on Mac OS for dev purposes, and that is fine as we don't need to push those images to any repo manager.
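To connect the two pieces, the Makefile's DOCKER_HOST value would presumably be fed into the image as the DOCKER_HOST_ADDR build argument; a hedged sketch of such a make target (the myapp-dev tag is made up, and the recipe line must start with a tab):
build:
	docker build --build-arg DOCKER_HOST_ADDR=$(DOCKER_HOST) -t myapp-dev .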
If you need to share the image on a repo, then I would make sure that the configuration can be changed dynamically through parameters and/or possibly an entrypoint script.
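A minimal sketch of such an entrypoint script, reusing the PHP_XDEBUG_REMOTE_HOST variable from the Dockerfile above (the script and ini file names are illustrative):
#!/bin/sh
# docker-entrypoint.sh: render the runtime value into a PHP config snippet,
# then hand control to the container's main process
echo "xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST:-localhost}" \
  > "$PHP_INI_DIR/conf.d/zz-xdebug-host.ini"
exec "$@"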
And secondly, where is the image file located? It will not be in my local system, I know. How can I see the files/folders of the Docker image?
You cannot see the files/folders of the Docker image directly. To see what's in the image you need to run a container, as that gives you an instance of it! Remember the cake/recipe analogy: you cannot see the content of the cake until you have baked it using the recipe.
You can, however, see all images stored on your machine by running docker images.
Hope this helps figuring things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker and where you wrote a Dockerfile. Now let's say you move to another PC which runs Windows. If Docker is running on that Windows machine and you want to build/run (or whatever) using your image, you can do that without changing anything.
For a more detailed/specific answer you will have to post the Dockerfile.

Docker - /bin/sh: <file> not found - bad ELF interpreter - how to add 32-bit lib support to a Docker image

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (half a day :P) to execute a binary extracted during docker build.
My dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
    unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b is a binary file imcl
The error I was getting was:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (root of the host being different from root of the container or something) but, after checking, the UID was 0 for both, so it got even weirder.
Docker asks not to use sudo so I tried with su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling I happened upon a piece of Docker docs which I believe is the reason to all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh and the machine I'm building this is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to Docker daemon so all files are used exactly as they were during build command?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave No such file or directory error but trying with centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686 and possibly libstdc++
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on DockerHub, search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image by yourself, and you can find instruction on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it in a tar archive using the docker save command and later import it using docker load.
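A minimal sketch of that transfer, with a made-up image name:
# on the machine with internet access
docker save my32bitbase:latest -o my32bitbase.tar
# copy the tar file to the offline machine (USB stick, scp, ...) and then:
docker load -i my32bitbase.tar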
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
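Note that ADD only auto-extracts recognized tar formats, so the zip from the question would have to be repackaged as, say, a .tar.gz first. A hedged Dockerfile sketch, mirroring the paths from the question:
# y.zip repackaged as y.tar.gz so that ADD auto-extracts it into the destination
ADD setup/x/y.tar.gz /tmp/setup/a/b/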

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am the root user, which means I do not have to type out superuser do (sudo) every time I run a privileged-only command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration-file changes for a specific web service, run environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: to have code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is wrong with respect to what are containers and what are images (images are save points of layers of code that come from Docker's servers, or can be created from containers, which require a base image to go off of; Dockerfiles serve as a way to automate the build process of making images by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands. But those commands do not really come to pass on, say, the local environment. While running some tests on a Docker build working with CentOS, the basic command structure goes:
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installed it.
The same can be said of Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT state that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed, yet there is a package named ...". So then, if Docker is able to be multi-platform, why can it not actually create those changes on the local platform/image we are targeting? The Docker builds run a simulation of what is going to happen in that particular environment, but I cannot seem to make it come to pass. This defeats one of my intended purposes of Docker if it cannot change anything local to the system one is using, unless I am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was view and modify the container, which can be done either with docker logs containerID or with docker run -t -i img /bin/sh, which puts me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command-line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
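For illustration, a combined invocation might look roughly like this (the image name is just a placeholder):
docker run -it --rm \
  --net=host --pid=host --privileged \
  -v /:/host \
  ubuntu:20.04 /bin/bash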
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

Within lxc/docker container - what happens if apt-get upgrade includes kernel update?

I am reading a lot of Docker guides that often use some Ubuntu base image and, either directly in the Dockerfile or in a bash script that gets copied into the container and run on start, include things like 'apt-get upgrade'.
As I understand it, the container still uses the host's kernel. So what happens when the apt-get upgrade includes a kernel upgrade? Does it create a /boot and install the files as usual, while the underlying LXC has some pass-through/whitelist mechanism for specific directories that always come from the host... so that it ignores those files in the guest container?
Thanks
fLo
The host's /boot is not visible to a Docker container, and the kernel image package should not be installed in such a container, since it's not needed. (Even if it is, though, it's entirely inert.)
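A quick way to see the shared kernel for yourself (a hedged illustration; any small image would do):
uname -r                           # kernel version on the host
docker run --rm debian uname -r    # prints the same version: the container uses the host's kernel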
