I noticed that apt-get purge doesn't always clean every trace of an installation.
As a concrete example, I'm trying to remove mailman (installed by apt-get install mailman).
Now, I tried to remove every(!) trace of this installation by apt-get purge mailman.
find / -name '*mailman*' reveals that there is still some stuff from mailman around:
/run/mailman
/var/cache/apt/archives/mailman_1%3a2.1.16-2_amd64.deb
/var/spool/postfix/private/mailman
/usr/share/bash-completion/completions/mailmanctl
/usr/share/locale-langpack/en_GB/LC_MESSAGES/mailman.mo
/usr/share/locale-langpack/en_AU/LC_MESSAGES/mailman.mo
Also, the installation created an additional user "list" and a group "list" that I'm quite sure weren't there before.
So, I was wondering: how thorough is apt-get purge? In other words, in what ways might apt-get install X; apt-get purge X change my system? And are there more thorough methods?
When a package is built, the files it installs are registered in the package's file list. When you uninstall the package, these files will be removed (remove keeps config files, purge removes them, too).
If a program itself generates files, the package manager usually doesn't know about this, and won't touch these files.
Also, third-party add-ons, suggested packages or dependencies may create files which do not belong to the removed/purged package, even if they don't make any sense without it.
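For example, while a package is still installed you can list exactly which files the package manager has registered for it (using mailman from the question as the example; after a purge this list is no longer available):
dpkg -L mailman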
To find out to which package a file belongs, use apt-file:
apt-file search /path/to/file
You may need to install apt-file and update its database:
sudo apt-get install apt-file && sudo apt-file update
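For example, to check which package (if any) ships one of the leftover files found above:
apt-file search /usr/share/locale-langpack/en_GB/LC_MESSAGES/mailman.mo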
You can use apt-get --purge remove PACKAGE instead of apt-get purge PACKAGE:
apt-get --purge remove mailman
And this will also remove the automatically installed dependencies that are no longer needed:
sudo apt-get autoremove mailman
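The two steps can also be combined into one, e.g. (mailman being the example package from the question):
sudo apt-get purge mailman && sudo apt-get autoremove --purge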
Related
We (my workplace) have an embedded product based around an NVIDIA TK1 which is running a custom build of Ubuntu for ARM. As part of the setup routine for our product, we have a custom script which downloads a large number of packages and archives from the web, extracts and installs them.
Ideally what we are looking to do is to pre-download these packages and archives into a "local" repository so that our product uses known versions which work with our application. Auto-updating of packages is disabled and the end product will rarely have access to the internet anyway; we just want to ensure that the versions of the packages used remain the same for EVERY product shipped.
As an example, here are parts of the update script:
sudo apt-get install -y linux-firmware
sudo apt-get install -y '.*libxcb.*' libxrender-dev libxi-dev libfontconfig1-dev libudev-dev libx11-dev libx11-xcb-dev libext-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
sudo apt-get install -y gcc-4.9 g++-4.9
sudo apt-get install -y ros-indigo-desktop
# This one is shortened as it's a long URL
wget cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
sudo dpkg -i cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
Obviously, a large number of the packages will have a lot of dependencies, so if there is a way of downloading ALL required packages and ALL required dependencies into a local package repository and changing my scripts so they are installed from there, that would be ideal.
I'm unsure which would be the best way to approach this.
I have had suggestions of installing all packages onto a base product, then "cloning" the file system and pushing it onto other modules; however, I don't really know the pros/cons of doing this.
UPDATE
Ok, so I've since found a large number of packages in the /var/cache/apt/archives/ folder which all seem to relate to what is installed by our script.
Is it feasible/safe to install ALL of these packages using sudo dpkg -i *.deb?
A good approach would be to use something like debootstrap to build a custom image and then flash it onto the devices.
To cache apt-get downloads there is apt-cacher. I haven't tried it, but its cache can be frozen.
For local repositories, there is reprepro.
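If a full repository manager like reprepro is more than you need, a minimal flat local repository can be built from the cached .debs. The sketch below makes assumptions about paths (/opt/local-repo is just an example location, and dpkg-scanpackages comes from the dpkg-dev package):
# collect the cached packages into a directory that will act as the repo
mkdir -p /opt/local-repo
cp /var/cache/apt/archives/*.deb /opt/local-repo/
# generate the flat-repo package index that apt expects
cd /opt/local-repo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# then point apt at the flat repo, e.g. with a line like this in /etc/apt/sources.list:
# deb [trusted=yes] file:/opt/local-repo ./
sudo apt-get update
After that, the install commands in the script would resolve against the pinned local versions instead of whatever is currently on the mirrors.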
I've seen two ways to install packages, for example squid, on CentOS:
1. yum -y install squid
2. yum install squid
Can anyone tell me what's the difference between them?
Also, I'm using CentOS 6.6.
If you supply -y, it automatically answers "yes" to any questions it would otherwise ask, e.g. "Are you sure you want to install squid? [Y/n]".
It is handy if the installation takes a long time and asks multiple questions, which happens when you install multiple programs at once. In that case, having to press Enter every now and again for the process to continue can be annoying.
For a full list of yum options and their definitions take a look at the help message for yum:
yum -h
With the -y option, yum will install the specified package along with its dependencies without asking for confirmation.
Without the -y option, yum will show information about the specified package and its dependencies and will ask for confirmation before installing.
The -y option is useful if a package is going to be installed from a script.
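For example, a provisioning script might use the -y form so it never blocks waiting for input (squid being the example package from the question):
#!/bin/bash
# non-interactive: yum answers "yes" to its confirmation prompt by itself
yum -y install squid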
I am a newbie on openSUSE. I needed to get build-essential for the system but could not get it using sudo apt-get install build-essential, or even by running sudo apt-get update first and then the previous command. I found a way to install most of the build-essential packages through sudo zypper install -t pattern devel_basis. However, I am not able to obtain the libframe package! I can't directly download it because my account is on an office computer and I don't have root access.
I am also attaching a screenshot of my terminal. The error is towards the end.
Screenshot
zypper info -t pattern devel_basis to see what packages are in the pattern
zypper install -t pattern devel_basis to install these packages
thanks to what-is-build-essentials-for-opensuse
I'm not sure if this requires a root login or just sudo privileges. These can be granted by your system administrator; you may just need to ask for them.
You need to add the repository before trying the zypper install.
The command is
zypper ar -f URL alias
where
ar is the short form of the addrepo command
-f tells zypper to set the autorefresh flag on the newly added repo
URL is the URL of the repo, the same one you would type in a browser to visit it;
in this case: http://download.opensuse.org/distribution/leap/42.2/repo/oss/
alias is a name that is easy to remember; in this case: openSUSE:Leap:42.2
so...
zypper ar -f http://download.opensuse.org/distribution/leap/42.2/repo/oss/ openSUSE:Leap:42.2
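After adding the repo, the usual next steps would be to refresh the repository metadata and retry the install from the earlier answer:
zypper refresh
zypper install -t pattern devel_basis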
I had a similar problem with an Ubuntu/Debian command:
sudo apt-get install build-essential libssl-devel
It turns out that in SUSE, build-essential and libssl-devel would be devel_basis and openssl-devel respectively. To install, I then searched on Google for just openssl-devel (as it was all that I needed at that time) and followed the link from https://software.opensuse.org. Hope this helps.
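If the matching repository is already added, the command-line equivalent would presumably just be:
sudo zypper install openssl-devel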
Often I install software packages using apt-get.
If the installation is stopped or interrupted somehow,
how do I find and remove the partially installed files? Also, if I install the same package later, will the apt-get installation process create duplicate files?
Try this:
sudo apt-get -f install
And then run update, upgrade, reinstall, etc. as needed.
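Putting that together, a typical recovery sequence might look like this (the package name is a placeholder):
sudo apt-get -f install
sudo apt-get update
sudo apt-get install --reinstall <package>
As for duplicates: dpkg keeps a list of the files each package owns, so reinstalling overwrites those files in place rather than creating duplicates.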
Sorry, very new to server stuff, but very curious. Why run apt-get update when building a container?
My guess would be that it's for security purposes; if that's the case, then that answers the question.
apt-get update refreshes the package index, i.e. the lists of available packages and their latest versions; it does not upgrade packages that are already installed. It's recommended that you always run apt-get update prior to running apt-get install, so that when apt-get install runs, the latest available version of the package is used.
RUN apt-get update -q -y && apt-get install -q -y <your-program>
(the -q -y flags just mean that apt will run quietly and without asking for confirmation, since waiting for an interactive prompt would cause the Docker build to fail)
First, let's make a distinction between apt-get update and apt-get upgrade. The update is to get the latest package index; this is so that you don't run into errors for outdated or removed packages when doing an apt-get install.
The upgrade actually goes through and upgrades the installed packages. It usually also requires a preceding update so that it has the updated package index. This might be done if there are packaging or security concerns about already installed packages.
You usually see an update a lot in builds because the base image may have a fairly out-of-date package index, and just doing an apt-get install can fail.
The upgrade would be less common, but could still be done if you want to ensure the latest packages are installed.
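As a sketch of how that distinction typically shows up in a Dockerfile (the base image and curl are just example choices, not anything from the question):
# base image chosen purely as an example
FROM ubuntu:20.04
# refresh the package index before installing, so the install doesn't hit a stale index
RUN apt-get update -y && apt-get install -y curl
# an upgrade of already-installed packages would be a separate, less common step:
# RUN apt-get update -y && apt-get upgrade -y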