In Docker, why is it recommended to run `apt-get update` in the Dockerfile? - security

Sorry, very new to server stuff, but very curious. Why run apt-get update when building a container?
My guess would be that it's for security purposes; if that's the case, then that'll answer the question.

apt-get update refreshes the package index, i.e. the list of available packages and versions from the configured sources; it does not upgrade any packages that are already installed. It's recommended that you always run apt-get update before apt-get install so that when the install runs, the latest available version of the package is used.
RUN apt-get update -q -y && apt-get install -q -y <your-program>
(the -q and -y flags make apt run quietly and assume "yes" to prompts; an interactive confirmation prompt would stall the non-interactive Docker build and cause it to fail)
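As an aside, a pattern you'll commonly see in official images combines the update, the install, and a cleanup of the package lists in a single RUN, so the index can never go stale in a cached layer and the layer stays small. A minimal sketch; <your-program> is a placeholder as above:
RUN apt-get update -q -y \
    && apt-get install -q -y --no-install-recommends <your-program> \
    && rm -rf /var/lib/apt/lists/*
If update and install sit in separate RUN instructions, Docker's layer cache can reuse an old update layer, and a later install may then fail to find packages or fetch outdated versions.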

First, let's make a distinction between apt-get update and apt-get upgrade. The update fetches the latest package index. This is so that you don't run into errors for outdated or removed packages when doing an apt-get install.
The upgrade actually goes through and upgrades installed packages. It usually requires a preceding update so that it works from a current package index. You might run it if there are bug or security concerns with packages that are already installed.
You usually see an update a lot in builds because the base image may have a fairly out-of-date package index, and a bare apt-get install can fail as a result.
The upgrade is less common, but you could still run it if you want to ensure the latest packages are installed.
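To make the distinction concrete, a minimal sketch (assuming a Debian/Ubuntu base image):
# Refresh the package index only; cheap and almost always wanted before an install
RUN apt-get update
# Refresh the index, then upgrade already-installed packages; heavier and less common
RUN apt-get update && apt-get upgrade -y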

Related

Google Colab Git-Annex Update Issue

I am trying to use Google Colab for an fMRI experiment; however, I have been struggling with incompatibilities between package versions (for datalad). At first, git did not update with sudo apt-get, and I found the solution with this:
sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version
Even though git is updated, git-annex does not seem to update. When trying to update it with this:
sudo apt-get update -y
sudo apt-get install -y git-annex
it gives this error:
"git-annex is already the newest version (6.20180227-1)." But datalad requires git-annex to be at least version 8.
The git configuration is already initialized (user name and email).
I would deeply appreciate any opinions on how to fix this issue.
Thank you so much in advance.
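The question went unanswered here, but one way to see why apt insists the package is "already the newest version" is to check which versions the configured repositories actually offer (a diagnostic aside, not from the original thread):
# Show the installed version and every candidate version the current sources provide
apt-cache policy git-annex
# List the repositories apt is drawing from
grep -rh '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
If no configured repository carries git-annex >= 8, no amount of apt-get update will help; a repository that ships a newer build (NeuroDebian is often suggested for git-annex) or a non-apt install would be needed.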

Problem with moby packages when installing docker-ce on CentOS 7

I have a docker image for CentOS 7 which installs docker-ce via the recommended instructions.
i.e.
RUN yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
RUN yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
RUN yum update -y
RUN yum install -y docker-ce docker-ce-cli containerd.io
Recently this stopped working and now fails as follows:
--> Processing Conflict: moby-containerd-1.3.6+azure-1.x86_64 conflicts containerd
--> Processing Conflict: moby-runc-1.0.0~rc10+azure-2.x86_64 conflicts runc
--> Finished Dependency Resolution
Error: moby-containerd conflicts with containerd.io-1.2.13-3.2.el7.x86_64
Error: moby-runc conflicts with containerd.io-1.2.13-3.2.el7.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
The command '/bin/sh -c yum install -y docker-ce docker-ce-cli containerd.io' returned a non-zero code: 1
If the install command is replaced with:
yum install -y docker
I get a different error to do with an unsigned package:
Package runc is obsoleted by moby-runc, trying to install moby-runc-1.0.0~rc10+azure-2.x86_64 instead
...
Package moby-runc-1.0.0~rc10+azure-2.x86_64.rpm is not signed
I tried forcing the use of a few old versions, as below, to no avail, e.g.
RUN yum install -y docker-1.13.1-102.git7f2769b.el7.centos
Why is this happening? How can I fix it? And how can I prevent similar problems in the future?
Update: A critical piece of information missing from this question is the use of Azure. I had the following, as the ASP.NET Core runtime is required to publish packages in an Azure DevOps pipeline:
RUN rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm
RUN yum update -y && yum install aspnetcore-runtime-3.1 -y
My repos needed to be updated; the following resolved it for me:
curl https://packages.microsoft.com/config/rhel/7/prod.repo > ./microsoft-prod.repo
sudo cp ./microsoft-prod.repo /etc/yum.repos.d/
yum update -y
https://learn.microsoft.com/en-us/windows-server/administration/linux-package-repository-for-microsoft-software
The answer from user8475213 worked for me, but I also had to run the following after those commands:
yum clean metadata
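If the fix doesn't seem to take effect, it may help to confirm which baseurl each Microsoft repo actually resolves to after the repo file is replaced (a diagnostic aside, not part of the original answers):
# Show each enabled repo with its resolved baseurl
yum -v repolist | grep -E 'Repo-id|Repo-baseurl'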
Caveat: the usual disclaimers apply. This is only what I think; I may be mistaken. Please comment / suggest edits if so.
How can I fix it?
This was actually a problem with the baseurl of a repo, not the docker-ce repo (though I did originally file a bug report there; see issue #11198).
The moby packages come from "packages-microsoft-com-prod".
It seems the baseurl has changed.
The correct one is now:
baseurl=https://packages.microsoft.com/rhel/7/prod/
installed via:
RUN curl https://packages.microsoft.com/config/rhel/7/prod.repo >/etc/yum.repos.d/microsoft-prod.repo
The one with the dodgy packages is:
baseurl=https://packages.microsoft.com/centos/7/prod/
Installed via:
RUN rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm
The moby packages exist only in the CentOS repo, which is possibly defunct, as Microsoft themselves have changed the installation documentation in various places.
There was also a workaround: you can exclude the moby packages until they are really ready, as in:
RUN yum install -y docker --exclude=moby-\*
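If you would rather the exclusion apply to every yum transaction in the image instead of a single install, one option (my suggestion, not from the thread) is a global exclude in yum.conf:
# Ignore moby-* in all subsequent yum transactions
RUN echo "exclude=moby-*" >> /etc/yum.conf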
Why is this happening?
I think this is caused by overly aggressive promotion of moby replacements for docker-ce functionality.
It is hopefully a transient state while they are working things out.
That the package moby-runc-1.0.0~rc10+azure-2.x86_64.rpm is not signed suggests a problem with the build process used. Such a package ought only to be available to beta testers, certainly not in a repository marked "stable".
How can I prevent similar problems in the future?
Contrary to popular myths, using docker does not completely isolate you from changes in the environment. The repositories used by your docker files are themselves part of the environment. If that environment changes, as in this case, then your reproducible build may cease to be reproducible. The only real way to avoid that is to host your own repositories which comes at a high price. Usually external repositories are stable enough that this is not an issue.
You should consider specifying specific versions of packages to install in your dockerfile to avoid getting unexpected upgrades. However, that will not help you in cases like this where a package is obsoleted and replaced.
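A pinning sketch for a Dockerfile; the version strings here are placeholders, and the real ones can be listed with yum --showduplicates:
# See every docker-ce version the configured repos offer
RUN yum --showduplicates list docker-ce
# Pin explicit versions rather than taking whatever is newest
RUN yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io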
Related problem in RHEL 8
It seems that Azure depends on packages from the container-tools module, and Docker conflicts with these packages. Removing the module's packages resolved it:
# dnf module remove container-tools
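Before removing anything, dnf's module tooling can show what is actually involved; a hedged sketch using the module name from the note above:
# Inspect the module and the packages it pulls in
dnf module list container-tools
dnf module info container-tools
# Disable the module stream so its packages stop shadowing Docker's
dnf module disable -y container-tools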

Ubuntu Linux - Install Packages from Local Repository

We (my workplace) have an embedded product based around an NVIDIA TK1 which is running a custom build of Ubuntu for ARM. As part of the setup routine for our product, we have a custom script which downloads a large number of packages and archives from the web, extracts and installs them.
Ideally what we are looking to do is to pre-download these packages and archives into a "local" repository so that our product uses known versions which work with our application. Auto-updating of packages is disabled, and the end product will rarely have access to the internet anyway; we just want to ensure that the versions of packages used remain the same for EVERY product shipped.
As an example, here are parts of the update script:
sudo apt-get install -y linux-firmware
sudo apt-get install -y '.*libxcb.*' libxrender-dev libxi-dev libfontconfig1-dev libudev-dev libx11-dev libx11-xcb-dev libxext-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
sudo apt-get install -y gcc-4.9 g++-4.9
sudo apt-get install -y ros-indigo-desktop
# This one is shortened as it's a long URL
wget cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
sudo dpkg -i cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
Obviously, a large number of the packages will have a lot of dependencies, so if there is a way of downloading ALL required packages and ALL required dependencies into a local package repository and changing my scripts so they install from there, that would be ideal.
I'm unsure which way would be the best to approach this.
I have had suggestions of installing all packages onto a base product and then "cloning" the file system and pushing it onto other modules; however, I don't really know the pros/cons of doing this.
UPDATE
OK, so I've since found a large number of packages in the /var/cache/apt/archives/ folder, which all seem to relate to what our script installs.
Is it feasible/safe to install ALL of these packages using sudo dpkg -i *.deb?
A good approach would be to use some kind of debootstrap to build a custom image and then flash it onto the devices.
To cache apt-get downloads there is apt-cacher. I haven't tried it, but its cache can be frozen.
For local repositories, there is reprepro.
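A minimal local-repository sketch using dpkg-scanpackages (shipped in the dpkg-dev package); the paths and the [trusted=yes] shortcut for an unsigned repo are my assumptions:
# Collect the .deb files apt has already downloaded
mkdir -p /opt/local-repo
cp /var/cache/apt/archives/*.deb /opt/local-repo/
# Generate the Packages index apt needs
cd /opt/local-repo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# Point apt at the local repo and refresh
echo 'deb [trusted=yes] file:/opt/local-repo ./' | sudo tee /etc/apt/sources.list.d/local.list
sudo apt-get update
Installing from such a repo with apt-get install resolves dependencies properly, unlike a blind sudo dpkg -i *.deb, which only succeeds if every dependency happens to be present in the directory.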

Incomplete installation files using apt-get

Often I install software packages using apt-get.
If the installation is stopped or interrupted somehow, how do I find and remove the partially installed files? Also, if I install the same package again later, will the apt-get installation process create duplicate files?
Try this:
sudo apt-get -f install
And then run update, upgrade, reinstall, etc., as needed.
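Spelled out, the commonly suggested recovery sequence looks like this (an expansion of the answer above, not verbatim from it; <package> is a placeholder):
# Finish configuring any half-installed packages
sudo dpkg --configure -a
# Pull in anything needed to satisfy broken dependencies
sudo apt-get -f install
# Refresh the index and reinstall the package cleanly
sudo apt-get update
sudo apt-get install --reinstall <package>
Reinstalling the same package does not create duplicate files: dpkg tracks every file a package owns and overwrites them in place.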

What traces of an application can apt-get purge still leave behind?

I noticed that apt-get purge doesn't always clean every trace of an installation.
As a concrete example, I'm trying to remove mailman (installed by apt-get install mailman).
Now, I tried to remove every(!) trace of this installation by apt-get purge mailman.
find / -name '*mailman*' reveals that there is still some stuff from mailman around:
/run/mailman
/var/cache/apt/archives/mailman_1%3a2.1.16-2_amd64.deb
/var/spool/postfix/private/mailman
/usr/share/bash-completion/completions/mailmanctl
/usr/share/locale-langpack/en_GB/LC_MESSAGES/mailman.mo
/usr/share/locale-langpack/en_AU/LC_MESSAGES/mailman.mo
Also, the installation created an additional user "list" and a group "list" that I'm quite sure wasn't there before.
So, I was wondering: how thorough is apt-get purge? In other words, in what way might apt-get install X; apt-get purge X change my system? And are there more thorough methods?
When a package is built, the files generated by the build process are registered. When you uninstall the package, these registered files will be removed (remove keeps config files; purge removes them too).
If a program itself generates files at runtime, the package manager usually doesn't know about them and won't touch them.
Also, third-party add-ons, suggested packages, or dependencies may create files which do not belong to the removed/purged package, even if they make no sense without it.
To find out to which package a file belongs, use apt-file:
apt-file search /path/to/file
You may need to install apt-file and update its database:
sudo apt-get install apt-file && sudo apt-file update
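For files that belong to an already-installed package, dpkg can answer the same question without installing anything extra (an aside, not from the original answer); the path is taken from the find output above:
# Which installed package owns this file?
dpkg -S /usr/share/bash-completion/completions/mailmanctl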
You can use apt-get --purge remove PACKAGE instead of apt-get purge:
apt-get --purge remove mailman
And this will remove the now-unneeded dependencies:
sudo apt-get autoremove mailman
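The two steps can also be combined; adding --purge to autoremove purges the config files of the removed dependencies as well (a refinement, not in the original answer):
sudo apt-get autoremove --purge mailman
Note that leftover users and groups such as "list" are created by the package's maintainer scripts and are deliberately left behind on purge; if unwanted, they have to be deleted by hand (e.g. with deluser and delgroup).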
