Within an LXC/Docker container - what happens if apt-get upgrade includes a kernel update? - linux

I am reading a lot of Docker guides where they will often use some Ubuntu base image and, either in the Dockerfile directly or in a bash script that gets copied to the container and run on start, do things like 'apt-get upgrade'.
As I understand it, the container still uses the host's kernel. So what happens when the apt-get upgrade includes a kernel upgrade? Does it create a /boot and install the files as usual, but the underlying LXC has some pass-through/whitelist mechanism for specific directories that always come from the host, so it ignores those files in the guest container?
Thanks
fLo

The host's /boot is not visible to a Docker container, and the kernel image package should not be installed in such a container, since it's not needed. (Even if it is, though, it's entirely inert.)
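A quick way to convince yourself of this (my own check, not part of the original answer): compare the kernel version reported inside a container with the one on the host; they match, because there is only one kernel.
uname -r                         # on the host
docker run --rm ubuntu uname -r  # in a container: prints the same version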

Related

Using mount command while Docker build

So this is not about seeking workarounds to -v.
I have a Dockerfile whose intent is to install a cross-compiler in /usr/local/<cross-compiler-path> inside the container. Later, during a build process, a file would be mounted at this cross-compiler path, like this:
root@5bee5daf8165:/# mount <blah.img.gz> /usr/local/<cross-compiler-path>
I get mount: /usr/local/<cross-compiler-path>: mount failed: Operation not permitted.
Although if I skip this step, finish the build, and then run a --privileged container and mount, it works fine.
I understand the reason for not giving privileged mode during the build, since it breaks the 'portability' of containers if they depend on host volumes. But in my case, I am attempting to mount it inside the container's own file system. Why is that not allowed?
For the record, I tried installing the cross-compiler on a different path, like this:
root@5bee5daf8165:/# mount <blah.img.gz> /home/<cross-compiler-path>
But that doesn't work either. I want to attempt the build inside the Dockerfile and then discard the build artifacts that bloat up my image once I no longer need them. What options do I have?
As mentioned in "Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?" by Vladislav Supalov:
Although there’s no functionality in Docker to have volumes at build-time, you can use multi-stage builds, benefit from Docker caching and save time by copying data from other images - be it multi-stage or tagged ones.
When building an image, you can’t mount a volume. However, you can copy (COPY) data from another image! By combining this, with a multi-stage build, you can pre-compute an expensive operation once, and re-use the resulting state as a starting point for future iterations.
Example:
FROM ubuntu as intermediate
RUN apt-get update && apt-get install -yqq python-dev python-virtualenv
RUN virtualenv /venv/
RUN mkdir -p /src
# those don't change often
ADD code/basic-requirements.txt /src/basic-requirements.txt
RUN /venv/bin/pip install -r /src/basic-requirements.txt

FROM ubuntu
RUN apt-get update && apt-get install -yqq python-dev python-virtualenv
# the data comes from the intermediate stage above
COPY --from=intermediate /venv /venv
ADD code/requirements.txt /src/requirements.txt
# this command starts from an almost-finished state every time
RUN /venv/bin/pip install -r /src/requirements.txt
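If you want to try the example, a plain build is enough (the image tag here is just a placeholder); the intermediate stage is built automatically and only the final stage ends up tagged:
docker build -t myapp .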
The OP adds in the comments:
I want to mount a volume internally, onto the container's own filesystem, using the mount command during the build, which currently doesn't work.
Just wanted to know if the 'mount' operation, in general, is tied to the kernel?
Kernel or not, using mount directly (outside of the sanctioned volumes) is not allowed for security reasons, as described here by BMitch:
Docker removes the mount privilege from containers because using this you could mount the host filesystem and escape the container.
If you really need to mount something during the build process, you might consider buildah, which can build without running a container for each layer (like docker build does), and can do so without being root.
Use buildah bud (build-using-dockerfile) to read your existing Dockerfile.
Note that with buildah mount you can do the reverse: it mounts the specified container's root file system in a location which can be accessed from the host, and returns its location.
That is another alternative.
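A rough sketch of that buildah mount workflow (image and tag names are placeholders; in rootless mode these commands typically need to run inside buildah unshare):
ctr=$(buildah from ubuntu)      # create a working container from a base image
mnt=$(buildah mount "$ctr")     # mount its root filesystem at a host path
# $mnt is now an ordinary host directory: copy files into it, or mount
# things beneath it with the host's full privileges
buildah umount "$ctr"
buildah commit "$ctr" my-image  # turn the result into an image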

Docker: Where is "reset to factory defaults" on linux?

I've used Docker on Windows and macOS for the past couple years. Often, when things got really messed up, I found it faster to use the Reset to factory defaults option in the Docker GUI to do a clean reset than to troubleshoot whatever problem was giving me grief.
Now I'm using Ubuntu 20.04 and I can't find this option. I found a long list of commands to remove/reset individual components but where is the single command for this like Windows/macOS?
Use your OS's package manager to uninstall the Docker package; then
sudo rm -rf /var/lib/docker
That should completely undo all Docker-related things.
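On Ubuntu, for example, that might look like the following (package names vary with how Docker was installed: docker.io from the Ubuntu repositories versus docker-ce from Docker's own repository):
sudo apt-get purge docker-ce docker-ce-cli containerd.io   # or: sudo apt-get purge docker.io
sudo rm -rf /var/lib/docker /var/lib/containerd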
Note that the "Desktop" applications have many more settings (VM disk/memory size, embedded Kubernetes, ...). The native-Linux Docker installations tend to have very few, and generally the only way to set them is by directly editing the JSON configuration file at /etc/docker/daemon.json. So "reset Docker" doesn't really tend to be an issue on native Linux.
As always, make sure you have an external copy of your images (in Docker Hub or a registry like ECR) or you can rebuild them from Dockerfiles, your containers are designed to tolerate being deleted and recreated, and if you use named volumes, you have backups of those.
You can use this command:
docker system prune -a
Description:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
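Note that docker system prune leaves named volumes alone by default; if you want those gone as well, add the --volumes flag:
docker system prune -a --volumes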

Docker compose file for linux Debian and Arch

Hope I'm doing this correctly...
First off, we are using docker-compose with a YAML file.
It is invoked something like this:
sudo docker-compose -f docker-compose.yml up -d
In the yml file we have something similar to:
version: '3.4'
services:
  MyContainer:
    image: "MyContainer:latest"
    container_name: MyContainer
    restart: always
    environment:
      - DISPLAY=unix$DISPLAY
      - QT_X11_NO_MITSHM=1
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - /dev/dri:/dev/dri
      - /usr/lib/x86_64-linux-gnu/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
      - ~/MyFiles/:/root/Myfiles
      - ~:/root/home
Now the problem starts. We have two operating systems in use across the team: Ubuntu on some machines, Arch and Manjaro on others. As an experienced Linux user might guess, this will not work on Arch, because /usr/lib/x86_64-linux-gnu is a very specific directory on Debian/Ubuntu systems. The equivalent on Arch/Manjaro and nearly every other Linux distro is /usr/lib or /usr/lib64.
Of course a hack would be to symlink that path to /usr/lib, but I don't want to do that on every new team member's machine that isn't running Ubuntu.
So these are all the upfront information to give.
My Question is:
What is the best approach in your opinion to solve this problem?
I did a Google search, but either I used the wrong keywords, or people don't have this problem because they design their containers smarter.
I know that Docker volumes can be created and then used in the docker-compose file, but for that we would need to rerun the setup on all the PCs, laptops and servers we have, which I would like to avoid if possible...
I have a lot to learn, so if you have more experience and knowledge, please be so kind as to explain my mistakes.
Regards,
Stefan
If you're trying to use the host display, host libraries, host filesystem, and host hardware devices, then the only thing you're getting out of Docker is an inconvenient packaging mechanism that requires root privileges to run. It'd be significantly easier to build a binary and run the application directly on the host.
If you must run this in Docker, the image should be self-contained: all of the code and libraries necessary to run the application needs to be in the image and copied in the Dockerfile. Most images start FROM some Linux distribution (maybe indirectly through a language runtime) and so you need to install the required libraries using its package manager.
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      libxv1
...
Bind-mounting binaries or libraries into containers leads not only to filesystem inconsistencies like the one you describe but in some cases also to binary-compatibility issues. The bind mount won't work properly on a macOS host, for instance. (Earlier recipes for using the Docker socket inside a Docker container recommended bind-mounting /usr/bin/docker into the container, but this could hit problems if a CentOS host Docker was built against different shared libraries than an Ubuntu container Docker.)
The volumes section in Docker Compose supports environment variable substitution. You can make use of that so the library path is machine-specific.
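For example (a sketch, not from the original answer; X11_LIB_DIR is an invented variable name you would set per machine, with the Ubuntu path as the default):
    volumes:
      - ${X11_LIB_DIR:-/usr/lib/x86_64-linux-gnu}/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
On an Arch or Manjaro machine you would set X11_LIB_DIR=/usr/lib (for example in an .env file next to the compose file); on Ubuntu the default applies.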

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am the root user, which means I do not have to type superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration-file changes for a specific web service, run environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: to have code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. That said, I am annoyed that the terminology is confusing with respect to what containers are and what images are (images are save points of layers of code that come from Docker's servers or can be created from containers, which require a base image to go off of; Dockerfiles serve as a way to automate the build process of making images by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands, but those commands do not really come to pass on, say, the local environment. While running some tests on a Docker build working with CentOS, the basic command structure goes:
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said with Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was to view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
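Putting those options together (a sketch only; the image name and what you run inside it are placeholders), a container with near-complete access to the host might look like:
docker run --rm -it --privileged --net=host --pid=host -v /:/host ubuntu bash
# inside this container the host's root filesystem is at /host;
# chroot /host then gives you a shell that operates on the host itself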
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

Install chromium to Linux disk image?

I'm sure this has been asked before but I have no clue what to search for
I am trying to create a custom Linux image (for the Raspberry Pi) - I am currently manipulating the filesystem of the .img but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image where you can only manipulate the filesystem and ideally not run it first? Am I best to boot up, install, and then create the image from that, or is there a way of doing it beforehand in the same way you can change configuration settings etc?
Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These actions are needed because these directories are created during the boot process; bind-mounting them into your system image emulates a full boot. Then, you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get only works on Debian-based systems; change the command according to your distribution.
You could find problems connecting to the internet, and they can be caused by the DNS configuration. The best thing you can do is copy your /etc/resolv.conf file into the remote filesystem, as this file is usually managed by DHCP and is empty in the chroot environment.
This is the only solution that gives you full access to the command line of the system you're trying to modify.
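When you are done, remember to leave the chroot and undo the bind mounts before unmounting the image (a small addition to the answer above; adjust the mountpoint to yours):
exit                                   # leave the chroot
sudo umount /mnt/disk_image/dev /mnt/disk_image/sys /mnt/disk_image/proc
sudo umount /mnt/disk_image            # finally, unmount the image itself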
This is an untested idea:
The dpkg tool, which can install .deb packages, has a --root option which can set a different filesystem than the local / path.
From the man page:
--instdir=dir
    Change default installation directory which refers to the
    directory where packages are to be installed. instdir is
    also the directory passed to chroot(2) before running
    package’s installation scripts, which means that the
    scripts see instdir as a root directory. (Defaults to /)

--root=dir
    Changing root changes instdir to dir and admindir to
    dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
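For example (a sketch; the .deb filename is a placeholder, and you would also need the package's dependencies available for the Pi's architecture):
sudo dpkg --root=/mnt/disk_image -i chromium-browser_<version>_armhf.deb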
There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang even has a utility like this, which is the distro I have personally selected for experimenting with my Pi.
